Spring 2026: Math 291 Homework

Any page and section numbers in the assignments below refer to Hefferon's text.

Tuesday, January 20

1. Verify properties 1-8 from today's lecture, for \(A = \begin{pmatrix} 1 & -9\\2 & 6\end{pmatrix}\), \(B = \begin{pmatrix} 1 & 4\\0 & -9\end{pmatrix}\), \(C = \begin{pmatrix}0 & -4\\9 & 2\end{pmatrix}\), \(\lambda = 7, \lambda_1 = -6, \lambda_2 = 4\).

Solution. This is straightforward.

2. Give a proof of the cancellation property using entries in the matrices rather than the proof given in class.

Solution. Write \(A = \begin{pmatrix} a & b\\c & d\end{pmatrix}\), \(B = \begin{pmatrix} e & f\\g & h\end{pmatrix}\), \(C = \begin{pmatrix} r & s\\t & u\end{pmatrix}\). Then,

\[\begin{pmatrix} a+e & b+f\\c+g & d+h\end{pmatrix} = A+B = A+C = \begin{pmatrix} a+r & b+s\\c+t & d+u\end{pmatrix}.\]

Thus, \(a+e = a+r\), \(b+f = b+s\), \(c+g = c+t\), \(d+h = d+u\). Since we have cancellation for real numbers, \(e = r\), \(f = s\), \(g = t\), \(h = u\), so \(B = C\).

Thursday, January 22

1. For the matrices \(A, B, C\) in the previous assignment, verify:

  1. (i) \(A(B+C) = AB+AC\).
  2. (ii) \(A(BC) = (AB)C\).

Solution. We just check (ii). \(AB = \begin{pmatrix} 1 & 85\\2 & -46\end{pmatrix}\), so \((AB)C = \begin{pmatrix} 765 & 166\\-414 & -100\end{pmatrix}\). On the other hand we have \(BC = \begin{pmatrix} 36 & 4\\-81 & -18\end{pmatrix}\), so \(A(BC) = \begin{pmatrix} 765 & 166\\-414 & -100\end{pmatrix}\).
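These products can also be sanity-checked numerically. Here is a minimal NumPy sketch (not part of the assignment) verifying both (i) and (ii) for the matrices above:

```python
import numpy as np

A = np.array([[1, -9], [2, 6]])
B = np.array([[1, 4], [0, -9]])
C = np.array([[0, -4], [9, 2]])

# (i) distributivity
assert np.array_equal(A @ (B + C), A @ B + A @ C)
# (ii) associativity, agreeing with the hand computation above
assert np.array_equal(A @ (B @ C), (A @ B) @ C)
assert np.array_equal((A @ B) @ C, [[765, 166], [-414, -100]])
```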

2. For the matrix \(A = \begin{pmatrix} 3 & 1\\5 & 2\end{pmatrix}\) first verify that \(A^{-1} = \begin{pmatrix} 2 & -1\\-5 & 3\end{pmatrix}\), and then use \(A^{-1}\) to solve the system of equations

\[\begin{align*} 3x+y &= 7\\ 5x+2y &= -3. \end{align*}\]

Solution. It's easy to check that \(AA^{-1} = I_2 = A^{-1}A\). To solve the system, start with the matrix equation \(A\begin{pmatrix} x\\y\end{pmatrix} = \begin{pmatrix} 7\\-3\end{pmatrix}\); multiplying both sides on the left by \(A^{-1}\) gives

\[\begin{pmatrix} x\\y\end{pmatrix} = \begin{pmatrix} 2 & -1\\-5 & 3\end{pmatrix} \cdot \begin{pmatrix} 7\\-3\end{pmatrix} = \begin{pmatrix} 17\\-44\end{pmatrix},\]

so \(x = 17, y = -44\).
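As an optional check, both the inverse and the resulting solution can be confirmed with NumPy:

```python
import numpy as np

A = np.array([[3, 1], [5, 2]])
A_inv = np.array([[2, -1], [-5, 3]])

# A_inv really is a two-sided inverse of A.
assert np.array_equal(A @ A_inv, np.eye(2, dtype=int))
assert np.array_equal(A_inv @ A, np.eye(2, dtype=int))

# Solve the system by applying A_inv to the right-hand side.
b = np.array([7, -3])
x = A_inv @ b
assert np.array_equal(x, [17, -44])
```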

3. Use mathematical induction to prove the following statements:

  1. (i) \(1^2+2^2+3^2+\cdots + n^2 = \frac{n(n+1)(2n+1)}{6}\), for all \(n\geq 1\).
  2. (ii) \(9^n-1\) is divisible by 8, for all \(n\geq 1\).

Solution. (i) For the base case, \(1 = \frac{1(1+1)(2\cdot 1+1)}{6}\), as required. Now, assume the formula holds for \(n-1\) and use this to prove the case \(n\):

\[\begin{align*} 1^2+2^2+3^2+\cdots + (n-1)^2 &= \frac{(n-1)(n)(2(n-1)+1)}{6} = \frac{(n-1)(n)(2n-1)}{6}. \end{align*}\]

Adding \(n^2\) to both sides, we get

\[\begin{align*} 1^2+2^2+3^2+\cdots + (n-1)^2 +n^2 &= \frac{(n-1)(n)(2n-1)}{6} +n^2 = \frac{2n^3+3n^2+n}{6} = \frac{n(n+1)(2n+1)}{6}, \end{align*}\]

which is the formula for \(n\).

For (ii), it turns out induction is not needed. If one uses the identity \(x^n -1= (x-1)(x^{n-1}+x^{n-2}+\cdots + x+1)\), substituting \(x = 9\) shows \(8\) divides \(9^n-1\).
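Both statements are easy to spot-check empirically for small \(n\); of course, only the proofs above certify every \(n\):

```python
# Empirical check for n = 1, ..., 100 (a sanity check, not a proof).
for n in range(1, 101):
    # (i) the sum-of-squares formula
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    # (ii) 8 divides 9^n - 1
    assert (9 ** n - 1) % 8 == 0
```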

Tuesday, January 27

1. For \(A = \begin{pmatrix} 1 & -9\\2 & 6\end{pmatrix}\), \(B = \begin{pmatrix} 1 & 4\\0 & -9\end{pmatrix}\) and \(\lambda = 7\), verify properties 1-5 of the determinant given in today's lecture.

Solution. These are straightforward calculations.

2. For the \(2\times 2\) matrix \(A\), we verified in class that if \(\textrm{det}(A)\not = 0\), then \(A^{-1}\) exists. Prove that if \(A^{-1}\) exists, then \(\det A \not = 0\). Thus, we have the following:

Theorem. A \(2\times 2\) matrix \(A\) is invertible if and only if \(\det A\not = 0\).

We'll see later in the semester that this holds for any \(n\times n\) matrix.

Solution. We have \(A^{-1}A = I_2\), so \(\det(A)\cdot \det(A^{-1}) = \det(A^{-1}A) = \det I_2 = 1\), so \(\det(A) \not = 0\).

3. Here are three systems of linear equations. Identify which one has a unique solution, infinitely many solutions and no solutions.

System A
\[\begin{align*} 2x + 3y &= 7 \\ 6x + 9y &= 31 \end{align*}\]
System B
\[\begin{align*} 2x + 3y &= -1 \\ 6x + 2y &= 4 \end{align*}\]
System C
\[\begin{align*} 2x + 3y &= 7 \\ 6x + 9y &= 21 \end{align*}\]

Solution. System A has no solution, since its equations correspond to parallel lines. System B has the unique solution \(x = 1, y =-1\). System C has infinitely many solutions, since each equation describes the same line.

Thursday, January 29

1. For the three systems of equations given in the previous assignment, use augmented matrices and Gaussian elimination to find the solution set of each system.

Solution. For A, we have \(\left[\begin{array}{cc|c} 2 & 3 & 7\\6 & 9 & 31\end{array}\right] \xrightarrow{-3\cdot R_1+R_2} \left[\begin{array}{cc|c} 2 & 3 & 7\\0 & 0 & 10\end{array}\right]\), so the system has no solution. For B, we have

\[\left[\begin{array}{cc|c}2 & 3 & -1\\6 & 2 & 4\end{array}\right] \xrightarrow{\frac{1}{2}\cdot R_1} \left[\begin{array}{cc|c}1 & \frac{3}{2} & -\frac{1}{2}\\6 & 2 & 4\end{array}\right] \xrightarrow{-6\cdot R_1+R_2} \left[\begin{array}{cc|c}1 & \frac{3}{2} & -\frac{1}{2}\\0 & -7 & 7\end{array}\right]\]
\[\xrightarrow{-\frac{1}{7}\cdot R_2} \left[\begin{array}{cc|c}1 & \frac{3}{2} & -\frac{1}{2}\\0 & 1 & -1\end{array}\right] \xrightarrow{-\frac{3}{2}\cdot R_2+R_1} \left[\begin{array}{cc|c}1 & 0 & 1\\0 & 1 & -1\end{array}\right]\]

Therefore, \(x = 1\) and \(y = -1\). For C, \(\left[\begin{array}{cc|c} 2 & 3 & 7\\6 & 9 & 21\end{array}\right]\xrightarrow[\frac{1}{2}\cdot R_1]{-3\cdot R_1+R_2} \left[\begin{array}{cc|c}1 & \frac{3}{2} & \frac{7}{2}\\0 & 0 & 0\end{array}\right]\). Solution set: \(\{(\frac{7}{2}-\frac{3}{2}t, t)\ |\ t\in \mathbb{R}\}\).
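If you want to double-check the three outcomes by machine, NumPy's `matrix_rank` and `solve` reproduce them (these tools are outside the course; this is only a sanity-check sketch):

```python
import numpy as np

M = np.array([[2.0, 3.0], [6.0, 9.0]])    # coefficient matrix of systems A and C
MB = np.array([[2.0, 3.0], [6.0, 2.0]])   # coefficient matrix of system B

# System A: augmenting raises the rank, so the system is inconsistent.
bA = np.array([[7.0], [31.0]])
assert np.linalg.matrix_rank(np.hstack([M, bA])) > np.linalg.matrix_rank(M)

# System B: invertible coefficient matrix, hence a unique solution.
xB = np.linalg.solve(MB, np.array([-1.0, 4.0]))
assert np.allclose(xB, [1.0, -1.0])

# System C: ranks agree but are below 2, so infinitely many solutions.
bC = np.array([[7.0], [21.0]])
assert np.linalg.matrix_rank(np.hstack([M, bC])) == np.linalg.matrix_rank(M) == 1
```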

2. Something new: Find the solution set to the system of equations below using Gaussian elimination, following as closely as you can the algorithm given in class. Hint: You'll have to introduce a parameter to describe the solution set.

\[\begin{align*} 2x+4y+6z &= 12\\ x+y+z &= 8. \end{align*}\]

Solution. Converting to an augmented matrix, we have

\[\left[\begin{array}{ccc|c}2 & 4 & 6 & 12\\1 & 1 & 1 & 8\end{array}\right]\xrightarrow{R_1\leftrightarrow R_2} \left[\begin{array}{ccc|c}1 & 1 & 1 & 8\\2 & 4 & 6 & 12\end{array}\right] \xrightarrow{-2\cdot R_1+R_2}\left[\begin{array}{ccc|c} 1 & 1 & 1 & 8\\0 & 2 & 4 & -4\end{array}\right]\]
\[\xrightarrow{\frac{1}{2}\cdot R_2}\left[\begin{array}{ccc|c}1 & 1 & 1 & 8\\0 & 1 & 2 & -2\end{array}\right]\xrightarrow{-1\cdot R_2+R_1}\left[\begin{array}{ccc|c}1 & 0 & -1 & 10\\0 & 1 & 2 & -2\end{array}\right].\]

Thus, the solution set is \(\{(10+t, -2-2t, t)\ |\ t\in \mathbb{R}\}\).
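A short script can spot-check that members of this one-parameter family satisfy both equations (sample values of the parameter only, not a proof):

```python
# Check the family (10 + t, -2 - 2t, t) at several values of t.
for t in (-3, 0, 1, 7, 100):
    x, y, z = 10 + t, -2 - 2 * t, t
    assert 2 * x + 4 * y + 6 * z == 12
    assert x + y + z == 8
```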

3. Suppose that the ordered pair \((s,t)\) is a solution to the system

\[\begin{align*} ax+by &= u\\ cx+dy &= v. \end{align*}\]

Verify that \((s,t)\) is a solution to each of the systems below. Assume \(\lambda \in \mathbb{R}\), and that \(\lambda\) is non-zero for System C.

System A
\[\begin{align*} cx + dy &= v \\ ax + by &= u \end{align*}\]
System B
\[\begin{align*} ax + by &= u \\ (c+\lambda a)x + (d+\lambda b)y &=v+\lambda u \end{align*}\]
System C
\[\begin{align*} ax + by &= u \\ \lambda cx + \lambda dy &= \lambda v \end{align*}\]

Now assume that \((s,t)\) is a solution to A, B, or C, and show that \((s,t)\) is a solution to the original system of equations. You must consider all three cases. These calculations show that solutions to systems of equations are invariant under elementary row operations.

Solution. We'll do the case of B. The other cases are similar, and easier. Suppose \((s,t)\) is a solution to the original system of equations, so that \(as+bt = u\) and \(cs+dt = v\). Clearly the first equation in B holds. For the second equation in B, we have

\[(c+\lambda a)s+(d+\lambda b)t = (cs+dt) + \lambda (as+bt) = v+\lambda u,\]

using the two original equations. Now suppose \((s,t)\) is a solution to system B. The first equation in B is the first equation in the original system, so \(as+bt = u\). Substituting this into the second equation of B gives

\[v+\lambda u = (c+\lambda a)s + (d+\lambda b)t = (cs+dt)+\lambda (as+bt) = (cs+dt)+ \lambda u,\]

so \(cs+dt = v\), which is what we want. (Note that this direction does not require \(\lambda \not = 0\).)

Tuesday, February 3

Use Gaussian elimination to solve problems 2.18 (a)-(f) in Hefferon.

Solution. (a) \(\left\{\begin{pmatrix} 6-2t\\t\end{pmatrix}\ \middle|\ t\in \mathbb{R}\right\}\). (b) \(\left\{\begin{pmatrix} 0\\1\end{pmatrix}\right\}\). (c) \(\left\{\begin{pmatrix} 4-t\\-1+t\\t\end{pmatrix}\ \middle|\ t\in \mathbb{R}\right\}\). (d) \(\left\{\begin{pmatrix} 1\\1\\1\end{pmatrix}\right\}\).

(e) \(\left\{\begin{pmatrix} \frac{5}{3}-\frac{1}{3}t_1-\frac{2}{3}t_2\\\frac{2}{3} +\frac{2}{3}t_1+\frac{1}{3}t_2\\t_1\\t_2\end{pmatrix}\ \middle|\ t_1, t_2\in \mathbb{R}\right\}\). (f) No solution.

Thursday, February 5

1. For the matrix \(A = \begin{pmatrix} 2 & 1 & 0\\0 & 4 & 0\\1 & 2 & -1\end{pmatrix}\), use Gaussian elimination to find \(A^{-1}\). Then check that your answer is correct.

Solution. \(\begin{bmatrix}2 & 1 & 0 & | & 1 & 0 & 0\\0 & 4 & 0 & | & 0 & 1 & 0\\1 & 2 & -1 & | & 0 & 0 & 1\end{bmatrix} \overset{R_1\leftrightarrow R_3}{\longrightarrow} \begin{bmatrix}1 & 2 & -1 & | & 0 & 0 & 1\\0 & 4 & 0 & | & 0 & 1 & 0\\2 & 1 & 0 & | & 1 & 0 & 0\end{bmatrix}\)

\(\xrightarrow[\frac{1}{4}\cdot R_2]{-2\cdot R_1+R_3}\begin{bmatrix}1 & 2 & -1 & | & 0 & 0 & 1\\0 & 1 & 0 & | & 0 & \frac{1}{4} & 0\\0 & -3 & 2 & | & 1 & 0 & -2\end{bmatrix}\) \(\xrightarrow[-2\cdot R_2+R_1]{3\cdot R_2+R_3}\begin{bmatrix}1 & 0 & -1 & | & 0 & -\frac{1}{2} & 1\\0 & 1 & 0 & | & 0 & \frac{1}{4} & 0\\0 & 0 & 2 & | & 1 & \frac{3}{4} & -2\end{bmatrix}\)

\(\xrightarrow{\frac{1}{2}\cdot R_3} \begin{bmatrix}1 & 0 & -1 & | & 0 & -\frac{1}{2} & 1\\0 & 1 & 0 & | & 0 & \frac{1}{4} & 0\\0 & 0 & 1 & | & \frac{1}{2} & \frac{3}{8} & -1\end{bmatrix} \xrightarrow{R_3+R_1}\begin{bmatrix}1 & 0 & 0 & | & \frac{1}{2} & -\frac{1}{8} & 0\\0 & 1 & 0 & | & 0 & \frac{1}{4} & 0\\0 & 0 & 1 & | & \frac{1}{2} & \frac{3}{8} & -1\end{bmatrix}\)

Thus, \(A^{-1} = \begin{pmatrix}\frac{1}{2} & -\frac{1}{8} & 0\\0 & \frac{1}{4} & 0\\\frac{1}{2} & \frac{3}{8} & -1\end{pmatrix}\).
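As an optional check, NumPy confirms both that this matrix is a two-sided inverse and that `numpy.linalg.inv` produces the same answer:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0], [0.0, 4.0, 0.0], [1.0, 2.0, -1.0]])
A_inv = np.array([[0.5, -0.125, 0.0],
                  [0.0, 0.25, 0.0],
                  [0.5, 0.375, -1.0]])

assert np.allclose(A @ A_inv, np.eye(3))
assert np.allclose(A_inv @ A, np.eye(3))
assert np.allclose(np.linalg.inv(A), A_inv)
```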

2. For the matrix \(A = \begin{pmatrix}2 & 3 & 6\\4 & 8 & 14\end{pmatrix}\)

  1. (i) Use elementary row operations to put \(A\) into RREF.
  2. (ii) Convert the elementary row operations you used in (i) to \(2\times 2\) elementary matrices, and then multiply \(A\) successively on the left by the elementary matrices to get the same RREF.
  3. (iii) Now multiply the elementary matrices from (ii) to get a \(2\times 2\) matrix \(B\). Check that \(BA\) gives the same RREF. Be careful: The order in which you multiply the elementary matrices matters.

Solution. (i) \(\begin{pmatrix} 2 & 3 & 6\\4 & 8 & 14\end{pmatrix}\xrightarrow{\frac{1}{2}\cdot R_1}\begin{pmatrix} 1 & \frac{3}{2} & 3\\4 & 8 & 14\end{pmatrix}\xrightarrow{-4\cdot R_1+R_2}\begin{pmatrix} 1 & \frac{3}{2} & 3\\0 & 2 & 2\end{pmatrix}\xrightarrow{\frac{1}{2}\cdot R_2}\begin{pmatrix} 1 & \frac{3}{2} & 3\\0 & 1 & 1\end{pmatrix}\) \(\xrightarrow{-\frac{3}{2}\cdot R_2+R_1}\begin{pmatrix} 1 & 0 & \frac{3}{2}\\0 & 1 & 1\end{pmatrix}\).

(ii) \(\begin{pmatrix} 1 & -\frac{3}{2}\\0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0\\0 & \frac{1}{2}\end{pmatrix}\begin{pmatrix} 1 & 0\\-4 & 1\end{pmatrix}\begin{pmatrix} \frac{1}{2} & 0\\0 & 1\end{pmatrix}\begin{pmatrix} 2 & 3 & 6\\4 & 8 & 14\end{pmatrix} = \begin{pmatrix} 1 & 0 & \frac{3}{2}\\0 & 1 & 1\end{pmatrix}\).

(iii) \(B = \begin{pmatrix} 1 & -\frac{3}{2}\\0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0\\0 & \frac{1}{2}\end{pmatrix}\begin{pmatrix} 1 & 0\\-4 & 1\end{pmatrix}\begin{pmatrix} \frac{1}{2} & 0\\0 & 1\end{pmatrix} = \begin{pmatrix} 2 & -\frac{3}{4}\\-1 & \frac{1}{2}\end{pmatrix}\) and

\[BA = \begin{pmatrix} 2 & -\frac{3}{4}\\-1 & \frac{1}{2}\end{pmatrix} \begin{pmatrix}2 & 3 & 6\\4 & 8 & 14\end{pmatrix} = \begin{pmatrix} 1 & 0 & \frac{3}{2}\\0 & 1 & 1\end{pmatrix}.\]
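The same bookkeeping can be replayed numerically; the NumPy sketch below (not part of the assignment) rebuilds \(B\) from the four elementary matrices and checks \(BA\):

```python
import numpy as np

A = np.array([[2.0, 3.0, 6.0], [4.0, 8.0, 14.0]])

# Elementary matrices, in the order the row operations were applied.
E1 = np.array([[0.5, 0.0], [0.0, 1.0]])    # (1/2) R1
E2 = np.array([[1.0, 0.0], [-4.0, 1.0]])   # -4 R1 + R2
E3 = np.array([[1.0, 0.0], [0.0, 0.5]])    # (1/2) R2
E4 = np.array([[1.0, -1.5], [0.0, 1.0]])   # -(3/2) R2 + R1

# Later operations multiply on the left, so B = E4 E3 E2 E1.
B = E4 @ E3 @ E2 @ E1
assert np.allclose(B, [[2.0, -0.75], [-1.0, 0.5]])
assert np.allclose(B @ A, [[1.0, 0.0, 1.5], [0.0, 1.0, 1.0]])
```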

3. A square matrix is said to be diagonal if its only non-zero entries lie on the main diagonal of the matrix. Prove that the diagonal matrix \(A = \begin{pmatrix} a & 0\\0 & b\end{pmatrix}\) has an inverse if and only if both \(a, b\) are non-zero. In this case, find \(A^{-1}\).

Solution. Suppose both \(a,b \not = 0\). Then \(\begin{pmatrix} \frac{1}{a} & 0\\0 & \frac{1}{b}\end{pmatrix} \begin{pmatrix} a & 0\\0 & b\end{pmatrix} = \begin{pmatrix} 1 & 0\\0 & 1\end{pmatrix} = \begin{pmatrix} a & 0\\0 & b\end{pmatrix} \begin{pmatrix} \frac{1}{a} & 0\\0 & \frac{1}{b}\end{pmatrix}\), so \(A\) is invertible with \(A^{-1} = \begin{pmatrix} \frac{1}{a} & 0\\0 & \frac{1}{b}\end{pmatrix}\). Conversely, since \(A\) has an inverse if and only if \(\det A \not = 0\), if \(A\) has an inverse, then \(ab \not = 0\), so both \(a\) and \(b\) are non-zero.

Bonus Problem 2. Let \(A\) be a \(2\times 2\) matrix. Prove that \(A\) is invertible if there exists a \(2\times 2\) matrix \(H\) such that \(HA = I_2\) or there exists a \(2\times 2\) matrix \(L\) such that \(AL = I_2\). Be sure to verify both scenarios. This problem shows that just one of the conditions in the definition of invertibility is required for a \(2\times 2\) matrix to be invertible. Due Tuesday, February 10. (5 points.)

Solution. If \(HA = I_2\), then \(1 = \det I_2 = \det HA = (\det H)\cdot (\det A)\), so \(\det A\not = 0\). Writing \(A = \begin{pmatrix} a & b\\c & d\end{pmatrix}\), we may thus form \(\begin{pmatrix} \frac{d}{\rho} & -\frac{b}{\rho}\\-\frac{c}{\rho} & \frac{a}{\rho}\end{pmatrix}\), with \(\rho = \det A\), which we know to be \(A^{-1}\). The case \(AL = I_2\) is similar.

Bonus Problem 3. Elementary matrices are defined for larger square matrices by applying elementary row operations to an identity matrix. Convert the elementary row operations you used in Problem 1 above to write the inverse you found as a product of elementary matrices. Be sure to check your answer. Again, be careful with the order in which you take the product of elementary matrices. Due February 10. (5 points)

Solution. We have

\[\scriptsize\begin{pmatrix} 1 & 0 & 1\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0 & 0\\0 & 1 & 0\\0 & 0 & \frac{1}{2}\end{pmatrix} \begin{pmatrix} 1 & -2 & 0\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\\0 & 3 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0\\0 & \frac{1}{4} & 0\\0 & 0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0 & 0\\0 & 1 & 0\\-2 & 0 & 1\end{pmatrix} \begin{pmatrix} 0 & 0 & 1\\0 & 1 & 0\\1 & 0 & 0\end{pmatrix} = A^{-1}.\]

You should check the details!
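One way to check them is numerically: the NumPy sketch below multiplies the seven elementary matrices from Problem 1 (later operations on the left) and compares the product with \(A^{-1}\):

```python
import numpy as np

# The seven row operations from Problem 1, as elementary matrices,
# listed in the order they were applied.
ops = [
    np.array([[0., 0., 1.], [0., 1., 0.], [1., 0., 0.]]),    # R1 <-> R3
    np.array([[1., 0., 0.], [0., 1., 0.], [-2., 0., 1.]]),   # -2 R1 + R3
    np.array([[1., 0., 0.], [0., 0.25, 0.], [0., 0., 1.]]),  # (1/4) R2
    np.array([[1., 0., 0.], [0., 1., 0.], [0., 3., 1.]]),    # 3 R2 + R3
    np.array([[1., -2., 0.], [0., 1., 0.], [0., 0., 1.]]),   # -2 R2 + R1
    np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.5]]),   # (1/2) R3
    np.array([[1., 0., 1.], [0., 1., 0.], [0., 0., 1.]]),    # R3 + R1
]

A = np.array([[2., 1., 0.], [0., 4., 0.], [1., 2., -1.]])

# Each later operation multiplies on the left.
product = np.eye(3)
for E in ops:
    product = E @ product

assert np.allclose(product @ A, np.eye(3))
assert np.allclose(product, np.linalg.inv(A))
```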

Tuesday, February 10

1. Show that the vectors \(v_1 = (2,3)\) and \(v_2 = (3,2)\) are linearly independent and then write \(w = (13, 12)\) as a linear combination of \(v_1\) and \(v_2\).

Solution. Since \(\textrm{det}\begin{pmatrix} 2 & 3\\3 & 2 \end{pmatrix}= -5 \neq 0\), \(v_1, v_2\) are linearly independent. To write \(w\) as a linear combination of \(v_1, v_2\) we must solve the system obtained from the vector equation \((13,12) = x(2,3)+y(3,2)\), i.e.,

\[\begin{align*} 2x+3y &= 13\\ 3x+2y &= 12. \end{align*}\]

Starting with the augmented matrix \(\begin{bmatrix}2 & 3 & | & 13\\3 & 2 & | & 12\end{bmatrix}\), Gaussian elimination yields \(x = 2, y = 3\), i.e., \(w = 2v_1+3v_2\).
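For an optional numerical check, both the independence and the coefficients can be read off from the matrix whose columns are \(v_1, v_2\):

```python
import numpy as np

V = np.column_stack([[2.0, 3.0], [3.0, 2.0]])  # columns v1, v2
w = np.array([13.0, 12.0])

assert abs(np.linalg.det(V)) > 1e-12           # nonzero determinant: independent
coeffs = np.linalg.solve(V, w)
assert np.allclose(coeffs, [2.0, 3.0])         # w = 2 v1 + 3 v2
```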

2. Show that the vectors \(v_1 = (1,1), v_2 = (2,1), v_3 = (6,4)\) are not linearly independent by finding real numbers \(\alpha, \beta, \gamma \in \mathbb{R}\), not all zero, such that \(\alpha v_1+\beta v_2+\gamma v_3 = \vec{0}\).

Solution. One seeks a non-trivial solution to the vector equation \(x(1,1)+y(2,1)+z(6,4) = \vec{0}\), i.e., a non-zero solution to the system of equations

\[\begin{align*} x+2y+6z &= 0\\ x+y+4z &= 0. \end{align*}\]

This can be done using Gaussian elimination, but a close inspection shows that \(2v_1+2v_2+(-1)v_3 = \vec{0}\). There are, in fact, infinitely many solutions to the system of equations above, namely all multiples of the vector \((2,2,-1)\).

3. Consider the line through the origin in \(\mathbb{R}^2\) given by \(6x-7y = 0\). Suppose the vectors \(v_1 = (u, v)\) and \(v_2 = (r,s)\) lie on the line. Show that: (i) The vector \(v_1+v_2\) lies on the line and (ii) The vector \(\lambda v_1\) lies on the line, for all \(\lambda \in \mathbb{R}\). Thus the set of vectors in \(\mathbb{R}^2\) lying on this line is closed under addition and scalar multiplication.

Solution. \(v_1+v_2 = (u+r, v+s)\), substituting gives \(6(u+r)-7(v+s) = (6u-7v)+(6r-7s) = 0 + 0 = 0\), so \(v_1+v_2\) lies on the line. \(\lambda v_1 = (\lambda u, \lambda v)\). Substituting gives \(6(\lambda u) - 7(\lambda v) = \lambda (6u-7v) = \lambda\cdot 0 = 0\), so \(\lambda v_1\) lies on the line.

Thursday, February 12

A subset \(W\subseteq \mathbb{R}^2\) is a subspace of \(\mathbb{R}^2\) if it is closed under vector addition and scalar multiplication.

1. Verify that the line \(2x+3y = 0\) is a subspace of \(\mathbb{R}^2\), but the line \(2x+3y = 1\) is not a subspace of \(\mathbb{R}^2\).

Solution. To see that the line \(2x+3y = 0\) is a subspace of \(\mathbb{R}^2\), one proceeds exactly as in Problem 3 from the previous assignment, i.e., assume the vectors \(v_1 = (u, v)\) and \(v_2 = (r,s)\) lie on the line, then show that: (i) The vector \(v_1+v_2\) lies on the line and (ii) The vector \(\lambda v_1\) lies on the line, for all \(\lambda \in \mathbb{R}\). The proof is almost exactly the same, just the coefficients in the equation of the line are different.

To see that the line \(2x+3y = 1\) is not a subspace, note that the vector \((2,-1)\) is on the line, since \(2\cdot 2+3\cdot(-1) = 1\), but its multiple \(2(2,-1) = (4,-2)\) is not, since \(2\cdot 4+3\cdot(-2) = 2 \not = 1\).

2. Show directly from the definition of subspace that any subspace of \(\mathbb{R}^2\) must contain \((0,0)\).

Solution. Take any \(v\in \mathbb{R}^2\) belonging to the subspace \(W\). By closure under scalar multiplication, \((-1)\cdot v = -v\in W\), and then by closure under addition, \(v+(-v) = \vec{0} \in W\).

3. Define the function \(T:\mathbb{R}^2\to \mathbb{R}^2\) by the equation \(T(x,y) = (-2x+y, x+4y)\). Thus for example, if \(v = (3,2)\), then \(T(v) = T(3,2) = (-2\cdot 3+2, 3+4\cdot 2) = (-4,11)\). Suppose \(v = (a,b)\) and \(w = (c,d)\). Show that:

  1. (i) \(T(v+w) = T(v)+T(w)\)
  2. (ii) \(T(\lambda v) = \lambda T(v)\), for \(\lambda \in \mathbb{R}\).

A function with properties (i) and (ii) is called a linear transformation.

Solution. For (i),

\[\begin{align*} T(v+w) &= T(a+c, b+d)\\ &= (-2(a+c)+(b+d), a+c+4(b+d))\\ &= (-2a-2c+b+d, a+c+4b+4d)\\ &= (-2a+b, a+4b)+(-2c+d,c+4d)\\ &= T(v)+T(w). \end{align*}\]

And for (ii),

\[T(\lambda v) = T(\lambda a, \lambda b) = (-2(\lambda a)+(\lambda b), \lambda a+ 4(\lambda b)) = \lambda (-2a+b, a+4b) = \lambda T(a,b).\]

Bonus Problem 4. First verify that \(\{\vec{0}\}\) and \(\mathbb{R}^2\) are subspaces of \(\mathbb{R}^2\) and then prove that lines through the origin are the only other subspaces of \(\mathbb{R}^2\). In other words, if \(W\) is a subset of \(\mathbb{R}^2\) and \(W\) is a subspace, then \(W\) is \(\{\vec{0}\}, \mathbb{R}^2\) or a line through the origin. Due Tuesday, February 17. (5 points)

Solution. It is easy to check that \(\{\vec{0}\}\) and \(\mathbb{R}^2\) are subspaces of \(\mathbb{R}^2\). Suppose \(W\) is a non-zero subspace, and let \(w = (a,b)\) be a non-zero vector in \(W\). Then \(w\) lies on the line \(L: bx-ay = 0\). We want to show that \(W\) is the line \(L\), assuming \(W\) is not \(\mathbb{R}^2\). Note that \((u,v)\) lies on \(L\) if and only if \((u,v)\) is a multiple of \(w\). To see this, on the one hand, if \((u,v) = tw\), then \((u,v) = (ta,tb)\) which satisfies the equation \(bx-ay = 0\). On the other hand, suppose \((u,v)\) lies on the line \(L\). Then \(bu-av = 0\). Suppose \(a \neq 0\). Then \(v = \frac{b}{a} u\), so \((u, v) = (u, \frac{b}{a}u) = \frac{u}{a}(a,b)\), showing \((u,v)\) is a multiple of \(w\). The argument is similar if \(b\neq 0\). Now, suppose there is a vector \(h\) in \(W\) not on the line \(L\). Then \(w, h\) are linearly independent vectors and therefore \(\langle w, h\rangle = \mathbb{R}^2\). But \(\mathbb{R}^2 = \langle w,h\rangle \subseteq W\) shows that \(W = \mathbb{R}^2\), contrary to our assumption on \(W\). Thus, \(W = L\), as required.

Tuesday, February 17

1. For the linear transformation \(T\begin{pmatrix} x\\y\end{pmatrix} = \begin{pmatrix} 2x-3y\\-x+y\end{pmatrix}\), and bases for \(\mathbb{R}^2\) \(E := \{e_1, e_2\}\), \(B =\{ w_1, w_2\}\), with \(w_1 = \begin{pmatrix} 1\\1\end{pmatrix}, w_2 = \begin{pmatrix} 1\\2\end{pmatrix}\), calculate \([T]_E^E, [T]_B^E, [T]_E^B\) and \([T]_B^B\).

Solution. Each matrix is obtained by solving various systems of equations. We first calculate the values of \(T\) on the given basis elements: \(T(e_1) = (2, -1)\), \(T(e_2) = (-3,1)\), \(T(w_1) = (-1,0)\), \(T(w_2) = (-4,1)\).

We can read off \([T]_E^E = \begin{pmatrix} 2 & -3\\-1 & 1\end{pmatrix}\), since any vector \((a,b) = ae_1+be_2\). Similarly, we can write down the matrix \([T]_B^E = \begin{pmatrix} -1 & -4\\0 & 1\end{pmatrix}\), since the values of \(T(w_1), T(w_2)\) are easily expressed in terms of \(e_1, e_2\).

For \([T]_E^B\), we have to express \(T(e_1) = (2,-1)\) and \(T(e_2) = (-3,1)\) as a linear combination of \(w_1, w_2\). In other words, we must solve the vector equations \((2,-1) = x(1,1)+y(1,2)\) and \((-3,1) = x(1,1)+y(1,2)\). These equations give rise to two systems of equations:

System A: \(x + y = 2,\ x + 2y = -1\)      System B: \(x + y = -3,\ x + 2y = 1\)

The solutions to the systems are \(x = 5, y = -3\) and \(x = -7, y = 4\). It follows that \([T]_E^B = \begin{pmatrix} 5 & -7\\-3 & 4\end{pmatrix}\).

Similarly, to calculate \([T]_B^B\), we must express \(T(w_1), T(w_2)\) as linear combinations of \(w_1, w_2\). In other words, we must solve the vector equations \((-1,0) = x(1,1)+y(1,2)\) and \((-4,1) = x(1,1)+y(1,2)\). Converting these to systems of equations and solving gives \(x = -2, y = 1\) for the first vector equation and \(x = -9, y = 5\) for the second. Therefore we have \([T]_B^B = \begin{pmatrix} -2 & -9\\1 & 5\end{pmatrix}\).
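All four matrices can be recomputed mechanically: apply \(T\) to each source basis vector, then solve for its coordinates in the target basis. Here is a NumPy sketch (the helper name `matrix_of` is ours, not course notation):

```python
import numpy as np

def T(v):
    x, y = v
    return np.array([2 * x - 3 * y, -x + y])

E = np.eye(2)                                   # columns e1, e2
W = np.column_stack([[1.0, 1.0], [1.0, 2.0]])   # columns w1, w2

def matrix_of(f, src, dst):
    # Column j = coordinates of f(j-th src basis vector) in the dst basis,
    # obtained by solving a small linear system, as in the text.
    return np.column_stack([np.linalg.solve(dst, f(src[:, j])) for j in range(2)])

assert np.allclose(matrix_of(T, E, E), [[2, -3], [-1, 1]])
assert np.allclose(matrix_of(T, W, E), [[-1, -4], [0, 1]])
assert np.allclose(matrix_of(T, E, W), [[5, -7], [-3, 4]])
assert np.allclose(matrix_of(T, W, W), [[-2, -9], [1, 5]])
```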

2. Let \(v_1, v_2\in \mathbb{R}^2\) be linearly independent. Thus, by a theorem from class, any vector \(w\in \mathbb{R}^2\) can be written as a linear combination of \(v_1, v_2\), i.e., \(w = av_1+bv_2\), for \(a,b\in \mathbb{R}\). Prove that the linear combination is unique, i.e., if \(w = cv_1+dv_2\), with \(c,d\in \mathbb{R}\) then \(a = c\) and \(b = d\). Note: This follows formally from our fundamental properties and the definition of linear independence, without having to assign coordinates to the vectors involved.

Solution. Suppose \(av_1+bv_2 = cv_1+dv_2\). Then \((a-c)v_1 +(b-d)v_2 = 0\). Since \(v_1, v_2\) are linearly independent, \(a-c = 0\) and \(b-d = 0\), i.e., \(a = c\) and \(b = d\), as required.

Thursday, February 19

1. Let \(T(x,y) = (2x-3y, -x+y)\), \(S(x,y) = (-y,x)\), \(\beta = \{(1,1), (1, 2)\}\), \(\gamma = \{(-1,1), (2,1)\}\). Verify the very important formula from today's lecture: \([ST]_E^{\gamma} = [S]_{\beta}^{\gamma}\cdot [T]_E^{\beta}\). You can use some of the calculations you have done in the previous homework.

Solution. We have the values of \(T(e_1), T(e_2)\) from the previous homework set. Now \(S(w_1) = S(1,1) = (-1,1)\) and \(S(w_2) = S(1,2) = (-2,1)\), while \(ST(e_1) = S(2,-1) = (1,2)\) and \(ST(e_2) = S(-3,1) = (-1,-3)\).

The technique for calculating the indicated matrices consists in solving various systems of equations as in the previous homework set. Upon doing so, we obtain: \[[T]_E^\beta = \begin{pmatrix} 5 & -7\\-3 & 4\end{pmatrix}, \quad [S]_\beta^\gamma = \begin{pmatrix} 1 & \frac{4}{3}\\0 & -\frac{1}{3}\end{pmatrix}, \quad [ST]_E^\gamma = \begin{pmatrix} 1 & -\frac{5}{3}\\1 & -\frac{4}{3}\end{pmatrix}.\] And we also have \[\begin{pmatrix} 1 & -\frac{5}{3}\\1 & -\frac{4}{3}\end{pmatrix} = \begin{pmatrix} 1 & \frac{4}{3}\\0 & -\frac{1}{3}\end{pmatrix}\cdot \begin{pmatrix} 5 & -7\\-3 & 4\end{pmatrix},\] as required.

2. Using the notation from problem 1, verify the change of basis formula \([S]_{\beta}^{\beta} = [I_2]_{\gamma}^{\beta}\cdot [S]_{\gamma}^{\gamma}\cdot [I_2]_{\beta}^{\gamma}\).

Solution. Calculating as before yields: \([S]_{\beta}^{\beta} = \begin{pmatrix} -3 & -5\\2 & 3\end{pmatrix}\), \([S]_\gamma^\gamma = \begin{pmatrix} -\frac{1}{3} & \frac{5}{3}\\-\frac{2}{3} & \frac{1}{3}\end{pmatrix}\), \([I_2]_\beta^\gamma = \begin{pmatrix} \frac{1}{3} & 1\\\frac{2}{3} & 1\end{pmatrix}\) and \([I_2]_\gamma^\beta = \begin{pmatrix} -3 & 3\\2 & -1\end{pmatrix}\), and one easily checks that \([S]_{\beta}^{\beta} = [I_2]_{\gamma}^{\beta}\cdot [S]_{\gamma}^{\gamma}\cdot [I_2]_{\beta}^{\gamma}\).
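The change of basis formula can likewise be checked numerically; the helper `mat` below (our own naming, not course notation) computes each bracket matrix by solving small linear systems:

```python
import numpy as np

def S(v):
    x, y = v
    return np.array([-y, x])

beta = np.column_stack([[1.0, 1.0], [1.0, 2.0]])     # columns (1,1), (1,2)
gamma = np.column_stack([[-1.0, 1.0], [2.0, 1.0]])   # columns (-1,1), (2,1)

def mat(f, src, dst):
    # Column j = coordinates of f(j-th src basis vector) in the dst basis.
    return np.column_stack([np.linalg.solve(dst, f(src[:, j])) for j in range(2)])

S_bb = mat(S, beta, beta)
S_gg = mat(S, gamma, gamma)
ident = lambda v: v
I_bg = mat(ident, beta, gamma)   # [I_2]_beta^gamma
I_gb = mat(ident, gamma, beta)   # [I_2]_gamma^beta

# The change of basis formula.
assert np.allclose(S_bb, I_gb @ S_gg @ I_bg)
```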

Bonus Problem 5. Use the very important formula to prove that matrix multiplication of \(2\times 2\) matrices is associative. Hint: First show that if \(A\) is a \(2\times 2\) matrix, then there exists \(T: \mathbb{R}^2\to \mathbb{R}^2\) such that \([T]^E_E = A\). Due Tuesday, February 24. (5 points)

Solution. Suppose \(A = \begin{pmatrix} a & c\\b & d\end{pmatrix}\). Define \(T\) to be the linear transformation with \(T(e_1) = (a,b)\) and \(T(e_2) = (c,d)\). Then \([T]_E^E = A\).

Now, let \(A, B, C\) be \(2\times 2\) matrices with entries in \(\mathbb{R}\) and \(T, S, U\) linear transformations from \(\mathbb{R}^2\) to \(\mathbb{R}^2\) such that \([T]_E^E = A\), \([S]_E^E = B\), \([U]_E^E = C\). Then by the very important formula and the fact that \(T(SU) = (TS)U\), we have \[\begin{aligned} A(BC) &= [T]_E^E\cdot ([S]_E^E[U]_E^E) = [T]_E^E\cdot [SU]_E^E = [T(SU)]_E^E \\ &= [(TS)U]_E^E = [TS]_E^E\cdot [U]_E^E = ([T]_E^E\cdot [S]_E^E)\cdot [U]_E^E = (AB)C. \end{aligned}\]

Tuesday, March 3

1. Show that the following matrices are diagonalizable by first finding their eigenvectors and eigenvalues: \(A = \begin{pmatrix} 1 & 4\\2 & 3\end{pmatrix}\) and \(B = \begin{pmatrix} 7 & 2\\-4 & 1\end{pmatrix}\).

Solution. We have \(p_A(x) = \det \begin{pmatrix} -x+1 & 4\\2 & -x+3 \end{pmatrix} = (x-1)(x-3)-8 = x^2-4x-5 = (x-5)(x+1)\), so 5, \(-1\) are the eigenvalues.

For 5: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} -4 & 4\\2 & -2\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & -1\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_1 = \begin{pmatrix} 1\\1\end{pmatrix}\).

For \(-1\): We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 2 & 4\\2 & 4\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & 2\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_2 = \begin{pmatrix} 2\\-1\end{pmatrix}\).

We take \(P = \begin{pmatrix} 1 & 2\\1 & -1\end{pmatrix}\), which gives \(P^{-1} = \begin{pmatrix} \frac{1}{3} & \frac{2}{3}\\\frac{1}{3} & -\frac{1}{3}\end{pmatrix}\), so that \(P^{-1}AP = \begin{pmatrix} 5 & 0\\0 & -1\end{pmatrix}\).

For the matrix \(B\), we have \(p_B(x) = \det \begin{pmatrix} -x+7 & 2\\-4 & -x+1\end{pmatrix} = (x-1)(x-7)+8 = x^2-8x+15 = (x-3)(x-5)\), so the eigenvalues are 3, 5.

For 5: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 2 & 2\\-4 & -4\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & 1\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_1 = \begin{pmatrix} 1\\-1\end{pmatrix}\).

For 3: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 4 & 2\\-4 & -2\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 2 & 1\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_2 = \begin{pmatrix} 1\\-2\end{pmatrix}\).

We take \(P = \begin{pmatrix} 1 & 1\\-1 & -2\end{pmatrix}\), which gives \(P^{-1} = \begin{pmatrix} 2 & 1\\-1 & -1\end{pmatrix}\), so that \(P^{-1}BP = \begin{pmatrix} 5 & 0\\0 & 3\end{pmatrix}\).
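Optionally, both diagonalizations can be confirmed in NumPy, with `numpy.linalg.eigvals` as an independent check on the eigenvalues:

```python
import numpy as np

A = np.array([[1.0, 4.0], [2.0, 3.0]])
P = np.array([[1.0, 2.0], [1.0, -1.0]])
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag([5.0, -1.0]))

B = np.array([[7.0, 2.0], [-4.0, 1.0]])
Q = np.array([[1.0, 1.0], [-1.0, -2.0]])
assert np.allclose(np.linalg.inv(Q) @ B @ Q, np.diag([5.0, 3.0]))

# Independent check on the eigenvalues themselves.
assert np.allclose(sorted(np.linalg.eigvals(A)), [-1.0, 5.0])
assert np.allclose(sorted(np.linalg.eigvals(B)), [3.0, 5.0])
```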

2. A key step in the diagonalizability of the matrix \(A\) is that there should be a basis for \(\mathbb{R}^2\) consisting of eigenvectors of \(A\). For the matrix \(A = \begin{pmatrix}1 & 2\\0 & 1\end{pmatrix}\), find the eigenvectors and eigenvalues and show that there is no basis for \(\mathbb{R}^2\) consisting of eigenvectors of \(A\).

Solution. We have \(p_A(x) = \det \begin{pmatrix} -x+1 & 2\\0 & -x+1\end{pmatrix} = (-x+1)^2\), so that 1 is a repeated root of \(p_A(x)\), and is the only eigenvalue. To find eigenvectors associated to 1, we solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 0 & 2\\0 & 0\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 0 & 1\\0 & 0\end{pmatrix}\), so the solution set has one parameter, and consists of all multiples of the eigenvector \(v_1 = \begin{pmatrix} 1\\0\end{pmatrix}\). Thus, the matrix \(A\) does not have a second eigenvector linearly independent from \(v_1\).

Bonus Problem 6. A \(2\times 2\) matrix \(A\) is a scalar matrix if \(A = \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} = \lambda\cdot I_2\), for some \(\lambda \in \mathbb{R}\). Show that:

  1. (i) If \(A\in \mathrm{M}_2(\mathbb{R})\) is a scalar matrix then \(AB = BA\), for all \(B\in \mathrm{M}_2(\mathbb{R})\).
  2. (ii) Prove that if \(A \in \mathrm{M}_2(\mathbb{R})\) is diagonalizable and \(P^{-1}AP\) is a scalar matrix, then \(A\) was already a scalar matrix.
This bonus problem is due Tuesday March 10 and is worth 5 points.

Solution. For (i), suppose \(B = \begin{pmatrix} a & b\\c & d\end{pmatrix}\). Then \(\begin{pmatrix} a & b\\c & d\end{pmatrix} \cdot \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} = \begin{pmatrix} \lambda a & \lambda b\\\lambda c & \lambda d\end{pmatrix} = \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} \cdot \begin{pmatrix} a & b\\c & d\end{pmatrix}\).

For (ii), suppose \(P^{-1}AP = \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix}\). Then, using (i) and multiplying on the left by \(P\) and on the right by \(P^{-1}\) we have

\[A = P(P^{-1}AP)P^{-1} = P\begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} P^{-1} = \begin{pmatrix} \lambda & 0\\0 & \lambda\end{pmatrix} PP^{-1} = \begin{pmatrix} \lambda & 0\\0 & \lambda \end{pmatrix}.\]

Thursday, March 5

1. For the matrix \(A = \begin{pmatrix} 1 & 0 & 0\\0 & 0 & 9\\0 & 1 & 0\end{pmatrix}\), find the eigenvalues of \(A\), the corresponding eigenvectors, and a diagonalizing matrix \(P\). Be sure to check that \(P^{-1}AP\) is a diagonal matrix. Note the process here is the same as for \(2\times 2\) matrices. First find the roots of the characteristic polynomial \(p_A(x) = \det (A-xI_3)\) and then find the corresponding eigenvectors as before; namely if \(\alpha\) is an eigenvalue, solve the homogeneous system of equations whose coefficient matrix is \(A-\alpha I_3\).

Solution. We have

\[\begin{aligned} p_A(x) &= \det \begin{pmatrix} -x+1 & 0 & 0\\0 & -x & 9\\0 & 1 & -x\end{pmatrix}\\ &= (-x+1)\cdot \det \begin{pmatrix} -x & 9\\1 & -x\end{pmatrix}\\ &= (-x+1)(x^2-9) = (-x+1)(x-3)(x+3), \end{aligned}\]

so the eigenvalues of \(A\) are \(1, 3, -3\).

For 1: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 0 & 0 & 0\\0 & -1 & 9\\0 & 1 & -1\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 0 & 1 & 0\\0 & 0 & 1\\0 & 0 & 0\end{pmatrix}\), so the solution set has one parameter and consists of all multiples of the eigenvector \(v_1 = \begin{pmatrix} 1\\0\\0\end{pmatrix}\).

For 3: We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} -2 & 0 & 0\\0 & -3 & 9\\0 & 1 & -3\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & 0 & 0\\0 & 1 & -3\\0 & 0 & 0\end{pmatrix}\), so the solution set has one parameter and consists of all multiples of the eigenvector \(v_2 = \begin{pmatrix} 0\\3\\1\end{pmatrix}\).

For \(-3\): We solve the homogeneous system with coefficient matrix \(\begin{pmatrix} 4 & 0 & 0\\0 & 3 & 9\\0 & 1 & 3\end{pmatrix}\). Gaussian elimination reduces this to \(\begin{pmatrix} 1 & 0 & 0\\0 & 1 & 3\\0 & 0 & 0\end{pmatrix}\), so the solution set has one parameter and consists of all multiples of the eigenvector \(v_3 = \begin{pmatrix} 0\\3\\-1\end{pmatrix}\).

We take \(P = \begin{pmatrix} 1 & 0 & 0\\0 & 3 & 3\\0 & 1 & -1\end{pmatrix}\). Using Gaussian elimination, we find that \(P^{-1} = \begin{pmatrix} 1 & 0 & 0\\0 & \frac{1}{6} & \frac{1}{2}\\0 & \frac{1}{6} & -\frac{1}{2}\end{pmatrix}\), so that \(P^{-1}AP = \begin{pmatrix} 1 & 0 & 0\\0 & 3 & 0\\0 & 0 & -3\end{pmatrix}\).
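As an optional check (not part of the assignment), the relation \(P^{-1}AP = \mathrm{diag}(1,3,-3)\) can be confirmed in a few lines of Python, using exact arithmetic from the standard library's `fractions` module:

```python
from fractions import Fraction as F

def matmul(X, Y):
    # Plain 3x3 matrix product over exact rationals/integers.
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 0, 0], [0, 0, 9], [0, 1, 0]]
P = [[1, 0, 0], [0, 3, 3], [0, 1, -1]]
# P^{-1} as computed by Gaussian elimination above.
Pinv = [[F(1), F(0), F(0)],
        [F(0), F(1, 6), F(1, 2)],
        [F(0), F(1, 6), F(-1, 2)]]

D = matmul(matmul(Pinv, A), P)
```

The result `D` is the diagonal matrix \(\mathrm{diag}(1,3,-3)\), as expected.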

2. Suppose \(A, B, P\in \textrm{M}_2(\mathbb{R})\) satisfy \(B = P^{-1}AP\). Show that \(p_B(x) = p_A(x)\), i.e., \(A\) and \(B\) have the same characteristic polynomial. Hint: \(x I_2 = xP^{-1}P\).

Solution. We have

\[\begin{aligned} p_B(x) &= \det (P^{-1}AP-xI_2)\\ &= \det(P^{-1}AP - xP^{-1}P)\\ &= \det\{P^{-1}(A-xI_2)P\}\\ &= \det P^{-1}\cdot \det (A-xI_2)\cdot \det P\\ &= \det (A-xI_2)\\ &= p_A(x). \end{aligned}\]
Tuesday, March 10

1. Let \(A = \begin{pmatrix} 2 & 1\\1 & 2\end{pmatrix}\). Verify that \(v_1 = \begin{pmatrix} 1\\1\end{pmatrix}\) and \(v_2 = \begin{pmatrix} 1\\-1\end{pmatrix}\) are eigenvectors of \(A\) with eigenvalues 3 and 1 respectively.

Solution. Straightforward.

2. In preparation for Thursday's lecture, verify that \(\begin{pmatrix} x_1(t)\\x_2(t)\end{pmatrix} = c_1e^{3t}\begin{pmatrix} 1\\1\end{pmatrix} + c_2e^{t} \begin{pmatrix} 1\\-1\end{pmatrix}\), equivalently, \(x_1(t) = c_1e^{3t}+c_2e^t\) and \(x_2(t) = c_1e^{3t}-c_2e^t\) is a solution to the system of differential equations,

\[\begin{align*} x_1'(t) &= 2x_1(t)+x_2(t)\\ x_2'(t) &= x_1(t)+2x_2(t). \end{align*}\]

Solution. On the one hand, for the given values of \(x_1(t), x_2(t)\), we have \(x_1'(t) = 3c_1e^{3t}+c_2e^t\) and \(x_2'(t) = 3c_1e^{3t}-c_2e^{t}\). On the other hand,

\[2x_1(t)+x_2(t) = 2(c_1e^{3t}+c_2e^t)+(c_1e^{3t}-c_2e^t) = 3c_1e^{3t}+c_2e^t = x_1'(t)\]

and

\[x_1(t)+2x_2(t) = (c_1e^{3t}+c_2e^t)+2(c_1e^{3t}-c_2e^t) = 3c_1e^{3t}-c_2e^t = x_2'(t),\]

which verifies that the given solutions satisfy the system of differential equations.

3. Given the solutions to the system of differential equations in the previous problem, solve the initial condition \(\begin{pmatrix} x_1(0)\\x_2(0)\end{pmatrix} = \begin{pmatrix} 3\\-4\end{pmatrix}\).

Solution. \(3 = x_1(0) = c_1+c_2\) and \(-4 = x_2(0) = c_1-c_2\). Solving this system gives \(c_1 = -1/2\) and \(c_2 = 7/2\). Thus, \(x_1(t) = (-1/2)e^{3t}+(7/2)e^t\) and \(x_2(t) = (-1/2)e^{3t}-(7/2)e^t\).
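As an optional sanity check, the solution can be verified numerically in Python: evaluate the initial conditions exactly, and compare a centered difference of \(x_1\) against \(2x_1+x_2\) at a sample point (a sketch, not part of the assignment):

```python
import math

# Solution found above: c1 = -1/2, c2 = 7/2.
def x1(t): return -0.5 * math.exp(3 * t) + 3.5 * math.exp(t)
def x2(t): return -0.5 * math.exp(3 * t) - 3.5 * math.exp(t)

# Initial conditions: x1(0) = 3, x2(0) = -4.
ic_ok = (x1(0.0) == 3.0 and x2(0.0) == -4.0)

# Centered-difference check of x1' = 2 x1 + x2 at t = 0.7.
h, t = 1e-6, 0.7
lhs = (x1(t + h) - x1(t - h)) / (2 * h)
ode_ok = abs(lhs - (2 * x1(t) + x2(t))) < 1e-4
```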

Thursday, March 12

1. For the system of first order linear differential equations

\[\begin{align*} x_1'(t) &= 5x_1(t)-3x_2(t)\\ x_2'(t) &= -6x_1(t)+2x_2(t) \end{align*}\]

first find the eigenvalues and corresponding eigenvectors for the coefficient matrix \(A = \begin{pmatrix} 5 & -3\\-6 & 2\end{pmatrix}\), then, for the given system, follow step-by-step the derivation of the solution to the system given in class. After writing the general solution, write the solution to the system with initial conditions \(x_1(0) = 2, x_2(0) = \sqrt{5}\).

Solution. It is straightforward to check that the eigenvalues of \(A\) are \(-1, 8\) with eigenvectors \(\begin{pmatrix} 1\\2\end{pmatrix}\) and \(\begin{pmatrix} 1\\-1\end{pmatrix}\) respectively. Thus we get \(P = \begin{pmatrix} 1 & 1\\2 & -1\end{pmatrix}\) and \(P^{-1} = -\frac{1}{3}\cdot \begin{pmatrix} -1 & -1\\-2 & 1\end{pmatrix}\). Set \(X(t) = \begin{pmatrix} x_1(t)\\x_2(t)\end{pmatrix}\) and \(X'(t) = \begin{pmatrix} x_1'(t)\\x_2'(t)\end{pmatrix}\), so that the system may be written as \(X'(t) = AX(t)\). We also set

\[W(t) = \begin{pmatrix} w_1(t)\\w_2(t)\end{pmatrix} = P^{-1}X(t) = -\frac{1}{3}\begin{pmatrix} -x_1(t)-x_2(t)\\-2x_1(t)+x_2(t)\end{pmatrix}.\]

Then \(X(t) = PW(t)\), so the system becomes \(APW(t) = PW'(t)\), thus \(P^{-1}AP\,W(t) = W'(t)\), and \(\begin{pmatrix} -1 & 0\\0 & 8\end{pmatrix}\begin{pmatrix} w_1(t)\\w_2(t)\end{pmatrix} = \begin{pmatrix} w_1'(t)\\w_2'(t)\end{pmatrix}\). This splits into \(w_1'(t) = -w_1(t)\) and \(w_2'(t) = 8w_2(t)\), so \(w_1(t) = c_1e^{-t}\) and \(w_2(t) = c_2e^{8t}\). Therefore,

\[X(t) = PW(t) = \begin{pmatrix} 1 & 1\\2 & -1\end{pmatrix}\begin{pmatrix} c_1e^{-t}\\c_2e^{8t}\end{pmatrix} = c_1e^{-t}\begin{pmatrix} 1\\2\end{pmatrix} + c_2e^{8t}\begin{pmatrix} 1\\-1\end{pmatrix}.\]

In particular \(x_1(t) = c_1e^{-t}+c_2e^{8t}\) and \(x_2(t) = 2c_1e^{-t}-c_2e^{8t}\). For the initial conditions: \(2 = x_1(0) = c_1+c_2\) and \(\sqrt{5} = x_2(0) = 2c_1-c_2\). Solving gives \(c_1 = \frac{\sqrt{5}+2}{3}\) and \(c_2 = \frac{4-\sqrt{5}}{3}\). Thus the solution is

\[x_1(t) = \frac{\sqrt{5}+2}{3}e^{-t}+\frac{4-\sqrt{5}}{3}e^{8t}\quad\quad \text{and}\quad\quad x_2(t) = \frac{2(\sqrt{5}+2)}{3}e^{-t}-\frac{(4-\sqrt{5})}{3}e^{8t}.\]
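The eigenvector computations and the values of \(c_1, c_2\) can be double-checked with a short Python sketch (optional):

```python
import math

A = [[5, -3], [-6, 2]]

def mv(M, v):
    # 2x2 matrix-vector product.
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# A(1,2) = -1*(1,2) and A(1,-1) = 8*(1,-1).
eig_ok = (mv(A, [1, 2]) == [-1, -2]) and (mv(A, [1, -1]) == [8, -8])

# c1 + c2 = 2 and 2 c1 - c2 = sqrt(5).
c1 = (math.sqrt(5) + 2) / 3
c2 = (4 - math.sqrt(5)) / 3
ic_ok = abs(c1 + c2 - 2) < 1e-12 and abs(2 * c1 - c2 - math.sqrt(5)) < 1e-12
```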

2. Use the fact that for any matrix \(A\), \((P^{-1}AP)^n = P^{-1}A^nP\) to find \(A^{99}\), for the matrix \(A\) in the problem above. You should use exponents in your answer. Hint: Use the fact that \(A\) is diagonalizable.

Solution. We write \(A = P\begin{pmatrix} -1 & 0\\0 & 8\end{pmatrix}P^{-1}\), so that

\[\begin{aligned} A^{99} &= P\begin{pmatrix} -1 & 0\\0 & 8\end{pmatrix}^{99}P^{-1}\\ &= P\begin{pmatrix} (-1)^{99} & 0\\0 & 8^{99}\end{pmatrix}P^{-1}\\ &= \begin{pmatrix} 1 & 1\\2 & -1\end{pmatrix}\begin{pmatrix} -1 & 0\\0 & 8^{99}\end{pmatrix}\left(-\frac{1}{3}\begin{pmatrix} -1 & -1\\-2 & 1\end{pmatrix}\right)\\ &= -\frac{1}{3}\begin{pmatrix} 1-2\cdot 8^{99} & 1+8^{99}\\2+2\cdot 8^{99} & 2-8^{99}\end{pmatrix}. \end{aligned}\]

As a check, replacing the exponent 99 by 1 in the last matrix gives \(-\frac{1}{3}\begin{pmatrix} -15 & 9\\18 & -6\end{pmatrix} = \begin{pmatrix} 5 & -3\\-6 & 2\end{pmatrix} = A\).
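The identity \(A^k = PD^kP^{-1}\) underlying this computation can be verified for small \(k\) by comparing against repeated multiplication, using exact rational arithmetic (an optional check):

```python
from fractions import Fraction as F

def matmul(X, Y):
    # Plain 2x2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[5, -3], [-6, 2]]
P = [[1, 1], [2, -1]]
Pinv = [[F(1, 3), F(1, 3)], [F(2, 3), F(-1, 3)]]  # = -(1/3)[[-1,-1],[-2,1]]

ok = True
Ak = [[1, 0], [0, 1]]
for k in range(1, 8):
    Ak = matmul(Ak, A)  # A^k by repeated multiplication
    Dk = [[F((-1) ** k), F(0)], [F(0), F(8 ** k)]]
    ok = ok and matmul(matmul(P, Dk), Pinv) == Ak
```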

Bonus Problem 7. Let \(D = \begin{pmatrix} \alpha & 0\\0 & \beta\end{pmatrix}\). Use the formula \(e^{x} = 1+x+\frac{1}{2!}x^2+\frac{1}{3!}x^3 + \cdots\) to find an expression for \(e^{D}\). Then use the ideas in problem 2 above to find \(e^A\), for \(A = \begin{pmatrix} 5 & -3\\-6 & 2\end{pmatrix}\). This bonus problem is due Tuesday, March 24 and is worth 5 points.

Solution. Substituting \(D\) into the formula for \(e^x\) we get

\[\begin{aligned} e^{D} &= I_2+D+\frac{1}{2!}D^2+\cdots\\ &= I_2+\begin{pmatrix} \alpha & 0\\0 & \beta\end{pmatrix}+\frac{1}{2!}\begin{pmatrix} \alpha^2 & 0\\0 & \beta^2\end{pmatrix}+\cdots\\ &= \begin{pmatrix} e^{\alpha} & 0\\0 & e^{\beta}\end{pmatrix}. \end{aligned}\]

When \(A = PDP^{-1}\), substituting into the formula for \(e^x\) gives

\[\begin{aligned} e^A &= I_2+A+\frac{1}{2!}A^2+\cdots\\ &= I_2+(PDP^{-1})+\frac{1}{2!}(PD^2P^{-1})+\cdots\\ &= P\left(I_2+D+\frac{1}{2!}D^2+\cdots\right)P^{-1}\\ &= Pe^DP^{-1}. \end{aligned}\]

From Problem 1, \(D = \begin{pmatrix} -1 & 0\\0 & 8\end{pmatrix}\), so \(e^D = \begin{pmatrix} e^{-1} & 0\\0 & e^8\end{pmatrix}\). Using \(P = \begin{pmatrix} 1 & 1\\2 & -1\end{pmatrix}\) and \(P^{-1} = -\frac{1}{3}\begin{pmatrix} -1 & -1\\-2 & 1\end{pmatrix}\) in the formula \(e^A = Pe^DP^{-1}\), we get \(e^A = -\frac{1}{3}\begin{pmatrix} -e^{-1}-2e^8 & -e^{-1}+e^8\\-2e^{-1}+2e^8 & -2e^{-1}-e^8\end{pmatrix}\).
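One can also check \(e^A = Pe^DP^{-1}\) numerically by summing the power series directly; the truncated series below agrees with the closed form to floating-point accuracy (an optional sketch):

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, terms=40):
    # Truncated power series I + A + A^2/2! + ... for a 2x2 matrix.
    S = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        T = [[t / n for t in row] for row in matmul(T, A)]  # T = A^n / n!
        S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return S

A = [[5.0, -3.0], [-6.0, 2.0]]
series = expm_series(A)

# Closed form found above, with the -(1/3) factor distributed.
e1, e8 = math.exp(-1), math.exp(8)
closed = [[(e1 + 2 * e8) / 3, (e1 - e8) / 3],
          [(2 * e1 - 2 * e8) / 3, (2 * e1 + e8) / 3]]

max_err = max(abs(series[i][j] - closed[i][j]) for i in range(2) for j in range(2))
```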

Tuesday, March 24

1. Let \(A\) be the \(3\times 3\) matrix presented in the first problem from the homework assignment of February 5. Write \(X(t) = \begin{pmatrix} x_1(t)\\x_2(t)\\x_3(t)\end{pmatrix}\), so we have a system of first order linear differential equations given by \(X'(t) = A\cdot X(t)\) with initial conditions \(x_1(0) = 3, x_2(0) = 2, x_3(0) = -1\). Solve this system of equations and express your answer using the exponential of a matrix.

Solution. The matrix from the February 5 assignment is \(A = \begin{pmatrix} 2 & 1 & 0\\0 & 4 & 0\\1 & 2 & -1\end{pmatrix}\). The characteristic polynomial is

\[\begin{aligned} p_A(x) &= \det\begin{pmatrix} 2-x & 1 & 0\\0 & 4-x & 0\\1 & 2 & -1-x\end{pmatrix}\\ &= (2-x)\det\begin{pmatrix} 4-x & 0\\2 & -1-x\end{pmatrix} - 1\cdot\det\begin{pmatrix} 0 & 0\\1 & -1-x\end{pmatrix}\\ &= (2-x)(4-x)(-1-x)\\ &= -(x-2)(x-4)(x+1), \end{aligned}\]

so the eigenvalues are \(2, 4, -1\).

For \(\lambda = 2\): Row-reducing \(A - 2I_3 = \begin{pmatrix} 0 & 1 & 0\\0 & 2 & 0\\1 & 2 & -3\end{pmatrix}\) gives \(\begin{pmatrix} 1 & 0 & -3\\0 & 1 & 0\\0 & 0 & 0\end{pmatrix}\), so the eigenvector is \(v_1 = \begin{pmatrix} 3\\0\\1\end{pmatrix}\).

For \(\lambda = 4\): Row-reducing \(A - 4I_3 = \begin{pmatrix} -2 & 1 & 0\\0 & 0 & 0\\1 & 2 & -5\end{pmatrix}\) gives \(\begin{pmatrix} 1 & 0 & -1\\0 & 1 & -2\\0 & 0 & 0\end{pmatrix}\), so the eigenvector is \(v_2 = \begin{pmatrix} 1\\2\\1\end{pmatrix}\).

For \(\lambda = -1\): Row-reducing \(A + I_3 = \begin{pmatrix} 3 & 1 & 0\\0 & 5 & 0\\1 & 2 & 0\end{pmatrix}\) gives \(\begin{pmatrix} 1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 0\end{pmatrix}\), so the eigenvector is \(v_3 = \begin{pmatrix} 0\\0\\1\end{pmatrix}\).

The general solution to \(X'(t) = AX(t)\) is

\[X(t) = c_1 e^{2t}\begin{pmatrix} 3\\0\\1\end{pmatrix} + c_2 e^{4t}\begin{pmatrix} 1\\2\\1\end{pmatrix} + c_3 e^{-t}\begin{pmatrix} 0\\0\\1\end{pmatrix}.\]

Applying the initial conditions \(x_1(0) = 3, x_2(0) = 2, x_3(0) = -1\), we get the system

\[\begin{aligned} 3c_1 + c_2 &= 3\\ 2c_2 &= 2\\ c_1 + c_2 + c_3 &= -1. \end{aligned}\]

From the second equation, \(c_2 = 1\). Substituting into the first gives \(c_1 = \frac{2}{3}\). The third equation then gives \(c_3 = -1 - \frac{2}{3} - 1 = -\frac{8}{3}\). Thus the solution with the given initial conditions is

\[X(t) = \frac{2}{3}e^{2t}\begin{pmatrix} 3\\0\\1\end{pmatrix} + e^{4t}\begin{pmatrix} 1\\2\\1\end{pmatrix} - \frac{8}{3}e^{-t}\begin{pmatrix} 0\\0\\1\end{pmatrix}.\]

Alternatively, using the diagonalizing matrix \(P = \begin{pmatrix} 3 & 1 & 0\\0 & 2 & 0\\1 & 1 & 1\end{pmatrix}\) satisfying \(P^{-1}AP = D = \begin{pmatrix} 2 & 0 & 0\\0 & 4 & 0\\0 & 0 & -1\end{pmatrix}\), with \(P^{-1} = \begin{pmatrix} \frac{1}{3} & -\frac{1}{6} & 0\\0 & \frac{1}{2} & 0\\-\frac{1}{3} & -\frac{1}{3} & 1\end{pmatrix}\), we compute \(e^{At} = Pe^{Dt}P^{-1}\) explicitly:

\[\begin{aligned} e^{At} &= \begin{pmatrix} 3 & 1 & 0\\0 & 2 & 0\\1 & 1 & 1\end{pmatrix}\begin{pmatrix} e^{2t} & 0 & 0\\0 & e^{4t} & 0\\0 & 0 & e^{-t}\end{pmatrix}\begin{pmatrix} \frac{1}{3} & -\frac{1}{6} & 0\\0 & \frac{1}{2} & 0\\-\frac{1}{3} & -\frac{1}{3} & 1\end{pmatrix}\\ &= \begin{pmatrix} 3e^{2t} & e^{4t} & 0\\0 & 2e^{4t} & 0\\e^{2t} & e^{4t} & e^{-t}\end{pmatrix}\begin{pmatrix} \frac{1}{3} & -\frac{1}{6} & 0\\0 & \frac{1}{2} & 0\\-\frac{1}{3} & -\frac{1}{3} & 1\end{pmatrix}\\ &= \begin{pmatrix} e^{2t} & \frac{e^{4t}-e^{2t}}{2} & 0\\0 & e^{4t} & 0\\\frac{e^{2t}-e^{-t}}{3} & \frac{3e^{4t}-e^{2t}-2e^{-t}}{6} & e^{-t}\end{pmatrix}. \end{aligned}\]

One can verify: setting \(t=0\) gives \(e^{A\cdot 0} = I_3\) \(\checkmark\). The solution with the given initial conditions can then be written as

\[X(t) = e^{At}X(0) = \begin{pmatrix} e^{2t} & \frac{e^{4t}-e^{2t}}{2} & 0\\0 & e^{4t} & 0\\\frac{e^{2t}-e^{-t}}{3} & \frac{3e^{4t}-e^{2t}-2e^{-t}}{6} & e^{-t}\end{pmatrix}\begin{pmatrix} 3\\2\\-1\end{pmatrix} = \begin{pmatrix} 2e^{2t}+e^{4t}\\2e^{4t}\\e^{4t}-\frac{8}{3}e^{-t}+\frac{2}{3}e^{2t}\end{pmatrix},\]

which agrees with the solution found above.
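As an optional numerical check on the closed form for \(e^{At}\): at \(t=0\) it gives \(I_3\), and its derivative at \(t=0\), approximated by a centered difference, recovers \(A\):

```python
import math

def E(t):
    # The entries of e^{At} computed above.
    e2, e4, em = math.exp(2 * t), math.exp(4 * t), math.exp(-t)
    return [[e2, (e4 - e2) / 2, 0.0],
            [0.0, e4, 0.0],
            [(e2 - em) / 3, (3 * e4 - e2 - 2 * em) / 6, em]]

A = [[2, 1, 0], [0, 4, 0], [1, 2, -1]]

identity_ok = E(0.0) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# (d/dt) e^{At} at t = 0 should equal A.
h = 1e-6
Ep, Em = E(h), E(-h)
deriv_ok = all(abs((Ep[i][j] - Em[i][j]) / (2 * h) - A[i][j]) < 1e-4
               for i in range(3) for j in range(3))
```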

2. Let \(a_1 = 1, a_2 = 1, a_3 = 2, a_4 = 3, a_5 = 5, \ldots\) be the Fibonacci sequence and set \(A = \begin{pmatrix} 0 & 1\\1 & 1\end{pmatrix}\). Prove by induction on \(k\) that \(A^k\cdot \begin{pmatrix} 0\\1\end{pmatrix} = \begin{pmatrix} a_k\\a_{k+1}\end{pmatrix}\), for all \(k\geq 1\).

Solution. We proceed by induction on \(k\).

Base case (\(k=1\)): We have \(A^1\cdot\begin{pmatrix} 0\\1\end{pmatrix} = \begin{pmatrix} 0 & 1\\1 & 1\end{pmatrix}\begin{pmatrix} 0\\1\end{pmatrix} = \begin{pmatrix} 1\\1\end{pmatrix} = \begin{pmatrix} a_1\\a_2\end{pmatrix}\), as required.

Inductive step: Assume \(A^k\cdot\begin{pmatrix} 0\\1\end{pmatrix} = \begin{pmatrix} a_k\\a_{k+1}\end{pmatrix}\). Then

\[A^{k+1}\cdot\begin{pmatrix} 0\\1\end{pmatrix} = A\cdot A^k\cdot\begin{pmatrix} 0\\1\end{pmatrix} = \begin{pmatrix} 0 & 1\\1 & 1\end{pmatrix}\begin{pmatrix} a_k\\a_{k+1}\end{pmatrix} = \begin{pmatrix} a_{k+1}\\a_k+a_{k+1}\end{pmatrix} = \begin{pmatrix} a_{k+1}\\a_{k+2}\end{pmatrix},\]

where the last equality uses the recurrence \(a_{k+2} = a_k + a_{k+1}\). This completes the induction.
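This induction is easy to confirm programmatically; the loop below compares \(A^k\begin{pmatrix}0\\1\end{pmatrix}\) with the Fibonacci sequence for the first ten values of \(k\) (an optional check):

```python
A = [[0, 1], [1, 1]]

# Fibonacci numbers a_1, a_2, a_3, ... = 1, 1, 2, 3, 5, ...
fib = [1, 1]
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])

ok = True
v = [0, 1]
for k in range(1, 11):
    v = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]  # v = A^k (0,1)^t
    ok = ok and v == [fib[k - 1], fib[k]]  # (a_k, a_{k+1})
```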

3. Let \(a_1, a_2, \ldots\) be the sequence defined by the equation \(A^k\cdot \begin{pmatrix} 0\\1\end{pmatrix} = \begin{pmatrix} a_k\\a_{k+1}\end{pmatrix}\), for \(A = \begin{pmatrix} 6 & 10\\-2 & -3\end{pmatrix}\) and \(k\geq 1\). Find the values of \(a_k\), for all \(k\).

Solution. We diagonalize \(A\). The characteristic polynomial is \(p_A(x) = (6-x)(-3-x)+20 = x^2-3x+2 = (x-1)(x-2)\), so the eigenvalues are \(1\) and \(2\).

For \(\lambda = 1\): Row-reducing \(A - I_2 = \begin{pmatrix} 5 & 10\\-2 & -4\end{pmatrix}\) gives eigenvector \(v_1 = \begin{pmatrix} 2\\-1\end{pmatrix}\).

For \(\lambda = 2\): Row-reducing \(A - 2I_2 = \begin{pmatrix} 4 & 10\\-2 & -5\end{pmatrix}\) gives eigenvector \(v_2 = \begin{pmatrix} 5\\-2\end{pmatrix}\).

We take \(P = \begin{pmatrix} 2 & 5\\-1 & -2\end{pmatrix}\), so that \(P^{-1}AP = \begin{pmatrix} 1 & 0\\0 & 2\end{pmatrix}\). Since \(\det P = 1\), we have \(P^{-1} = \begin{pmatrix} -2 & -5\\1 & 2\end{pmatrix}\). Therefore,

\[\begin{aligned} A^k &= P\begin{pmatrix} 1 & 0\\0 & 2^k\end{pmatrix}P^{-1}\\ &= \begin{pmatrix} 2 & 5\cdot 2^k\\-1 & -2\cdot 2^k\end{pmatrix}\begin{pmatrix} -2 & -5\\1 & 2\end{pmatrix}\\ &= \begin{pmatrix} -4+5\cdot 2^k & -10+10\cdot 2^k\\2-2\cdot 2^k & 5-4\cdot 2^k\end{pmatrix}. \end{aligned}\]

Thus,

\[A^k\begin{pmatrix} 0\\1\end{pmatrix} = \begin{pmatrix} -4+5\cdot 2^k & -10+10\cdot 2^k\\2-2\cdot 2^k & 5-4\cdot 2^k\end{pmatrix}\begin{pmatrix} 0\\1\end{pmatrix} = \begin{pmatrix} 10\cdot 2^k-10\\5-4\cdot 2^k\end{pmatrix} = \begin{pmatrix} a_k\\a_{k+1}\end{pmatrix},\]

so \(a_k = 10\cdot 2^k-10\) for all \(k\geq 1\). We verify: \(A\begin{pmatrix} 0\\1\end{pmatrix} = \begin{pmatrix} 10\\-3\end{pmatrix}\), so \(a_1 = 10 = 10\cdot 2 - 10\), and \(A^2\begin{pmatrix} 0\\1\end{pmatrix} = A\begin{pmatrix} 10\\-3\end{pmatrix} = \begin{pmatrix} 30\\-11\end{pmatrix}\), so \(a_2 = 30 = 10\cdot 4-10\).
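The closed form \(a_k = 10\cdot 2^k-10\) can likewise be checked against direct iteration (optional):

```python
A = [[6, 10], [-2, -3]]

ok = True
v = [0, 1]
for k in range(1, 10):
    v = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]  # v = A^k (0,1)^t
    ok = ok and v[0] == 10 * 2 ** k - 10   # a_k is the first component
```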

Thursday, March 26

1. Find an orthogonal matrix that diagonalizes \(A = \begin{pmatrix} 5 & 4\\4 & -1\end{pmatrix}\). Be sure to check your answer.

Solution. The characteristic polynomial is \(p_A(x) = (5-x)(-1-x)-16 = x^2-4x-21 = (x-7)(x+3)\), so the eigenvalues are \(7\) and \(-3\).

For \(\lambda = 7\): Row-reducing \(A-7I_2 = \begin{pmatrix} -2 & 4\\4 & -8\end{pmatrix}\) gives eigenvector \(\begin{pmatrix} 2\\1\end{pmatrix}\), which we normalize to \(\frac{1}{\sqrt{5}}\begin{pmatrix} 2\\1\end{pmatrix}\).

For \(\lambda = -3\): Row-reducing \(A+3I_2 = \begin{pmatrix} 8 & 4\\4 & 2\end{pmatrix}\) gives eigenvector \(\begin{pmatrix} 1\\-2\end{pmatrix}\), which we normalize to \(\frac{1}{\sqrt{5}}\begin{pmatrix} 1\\-2\end{pmatrix}\).

Note that the two eigenvectors are orthogonal, as expected for a symmetric matrix with distinct eigenvalues. We take

\[Q = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 1\\1 & -2\end{pmatrix}.\]

It is easy to check that in this case \(Q^{-1} = Q\), so we verify:

\[\begin{aligned} Q^{-1}AQ &= \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 1\\1 & -2\end{pmatrix}\begin{pmatrix} 5 & 4\\4 & -1\end{pmatrix}\frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 1\\1 & -2\end{pmatrix}\\ &= \frac{1}{5}\begin{pmatrix} 14 & 7\\-3 & 6\end{pmatrix}\begin{pmatrix} 2 & 1\\1 & -2\end{pmatrix}\\ &= \frac{1}{5}\begin{pmatrix} 35 & 0\\0 & -15\end{pmatrix} = \begin{pmatrix} 7 & 0\\0 & -3\end{pmatrix}. \end{aligned}\]

2. Suppose \(A\) is a \(2\times 2\) matrix with eigenvalue \(\lambda\). Suppose that \(v_1, v_2\) are two eigenvectors of \(A\) for \(\lambda\), i.e., \(Av_1 = \lambda v_1\) and \(Av_2 = \lambda v_2\). Prove that for any \(a, b\in \mathbb{R}\), \(av_1+bv_2\) is also an eigenvector of \(A\) for \(\lambda\).

Solution. Using linearity of matrix multiplication and the fact that \(v_1, v_2\) are eigenvectors for \(\lambda\), we have

\[A(av_1+bv_2) = aAv_1+bAv_2 = a\lambda v_1+b\lambda v_2 = \lambda(av_1+bv_2).\]

Thus \(av_1+bv_2\) is an eigenvector of \(A\) for \(\lambda\), provided \(av_1+bv_2\neq \mathbf{0}\) (recall that an eigenvector must be nonzero). \(\square\)

Bonus Problem 8. Let \(Q\) be a \(2\times 2\) orthogonal real matrix. Prove that \(Q^{-1} = Q^t\). This problem is due Tuesday March 31. (3 points)

Solution. By definition, \(Q\) is orthogonal if its columns \(C_1, C_2\) form an orthonormal basis for \(\mathbb{R}^2\), i.e., \(C_1, C_2\) have length one and are orthogonal. The transpose \(Q^t\) has rows \(C_1^t, C_2^t\). We check the entries of \(Q^tQ\): the \((1,1)\) entry is \(C_1^tC_1 = C_1\cdot C_1 = 1\), the \((1,2)\) entry is \(C_1^tC_2 = C_1\cdot C_2 = 0\), the \((2,1)\) entry is \(C_2^tC_1 = 0\), and the \((2,2)\) entry is \(C_2^tC_2 = 1\). Thus \(Q^tQ = I_2\), and therefore \(Q^{-1} = Q^t\).

Tuesday, March 31

1. Set \(A := \begin{pmatrix} 1 & 0 & 1\\0 & 1 & -1\\1 & -1 & 2\end{pmatrix}\), so that \(A\) is symmetric. We can go through the same process as for \(2\times 2\) matrices to see that \(A\) is orthogonally diagonalizable. The same process works in this case since \(A\) has three distinct eigenvalues.

  1. (i) First find three distinct eigenvalues for \(A\). This tells us that \(A\) is diagonalizable.
  2. (ii) Find one eigenvector for each eigenvalue. Call these vectors \(v_1, v_2, v_3\). Check that these three vectors are mutually orthogonal.
  3. (iii) Set \(u_1 := \frac{1}{||v_1||}\cdot v_1\), \(u_2 := \frac{1}{||v_2||}\cdot v_2\), \(u_3 := \frac{1}{||v_3||}\cdot v_3\) and let \(Q\) denote the \(3\times 3\) matrix whose columns are \(u_1, u_2, u_3\), so that \(Q\) is an orthogonal \(3\times 3\) matrix. Verify that \(Q^{-1}AQ\) is diagonal, so that \(A\) is orthogonally diagonalizable.
  4. (iv) Calculate \(Q^tQ\). You should get \(I_3\), which illustrates the property \(Q^t = Q^{-1}\) enjoyed by orthogonal matrices.

Solution.

(i) To find the eigenvalues, we compute the characteristic polynomial by expanding along the first row:

\[\begin{aligned} p_A(x) &= \det(A - xI_3) = \det\begin{pmatrix} 1-x & 0 & 1\\0 & 1-x & -1\\1 & -1 & 2-x\end{pmatrix}\\ &= (1-x)\det\begin{pmatrix} 1-x & -1\\-1 & 2-x\end{pmatrix} + 1\cdot\det\begin{pmatrix} 0 & 1-x\\1 & -1\end{pmatrix}\\ &= (1-x)\bigl[(1-x)(2-x)-1\bigr] - (1-x)\\ &= (1-x)\bigl[(1-x)(2-x) - 2\bigr]\\ &= (1-x)(x^2 - 3x)\\ &= -x(x-1)(x-3). \end{aligned}\]

The three distinct eigenvalues are \(\lambda_1 = 0\), \(\lambda_2 = 1\), \(\lambda_3 = 3\). Since \(A\) is a \(3\times 3\) matrix with three distinct eigenvalues, it is diagonalizable.

(ii) For \(\lambda_1 = 0\): Row-reducing \(A\):

\[\begin{pmatrix} 1 & 0 & 1\\0 & 1 & -1\\1 & -1 & 2\end{pmatrix} \xrightarrow{R_3 \leftarrow R_3 - R_1} \begin{pmatrix} 1 & 0 & 1\\0 & 1 & -1\\0 & -1 & 1\end{pmatrix} \xrightarrow{R_3 \leftarrow R_3 + R_2} \begin{pmatrix} 1 & 0 & 1\\0 & 1 & -1\\0 & 0 & 0\end{pmatrix},\]

giving eigenvector \(v_1 = \begin{pmatrix} -1\\1\\1\end{pmatrix}\).

For \(\lambda_2 = 1\): Row-reducing \(A - I_3\):

\[\begin{pmatrix} 0 & 0 & 1\\0 & 0 & -1\\1 & -1 & 1\end{pmatrix} \xrightarrow{} \begin{pmatrix} 1 & -1 & 1\\0 & 0 & 1\\0 & 0 & 0\end{pmatrix} \xrightarrow{R_1 \leftarrow R_1 - R_2} \begin{pmatrix} 1 & -1 & 0\\0 & 0 & 1\\0 & 0 & 0\end{pmatrix},\]

giving eigenvector \(v_2 = \begin{pmatrix} 1\\1\\0\end{pmatrix}\).

For \(\lambda_3 = 3\): Row-reducing \(A - 3I_3\):

\[\begin{pmatrix} -2 & 0 & 1\\0 & -2 & -1\\1 & -1 & -1\end{pmatrix} \xrightarrow{R_1 \leftrightarrow R_3} \begin{pmatrix} 1 & -1 & -1\\0 & -2 & -1\\-2 & 0 & 1\end{pmatrix} \xrightarrow{R_3 \leftarrow R_3 + 2R_1} \begin{pmatrix} 1 & -1 & -1\\0 & -2 & -1\\0 & -2 & -1\end{pmatrix}\] \[\xrightarrow{R_3 \leftarrow R_3 - R_2} \begin{pmatrix} 1 & -1 & -1\\0 & -2 & -1\\0 & 0 & 0\end{pmatrix},\]

giving (with \(x_3 = 2\)) eigenvector \(v_3 = \begin{pmatrix} 1\\-1\\2\end{pmatrix}\).

We verify mutual orthogonality:

\[\begin{aligned}v_1\cdot v_2 &= (-1)(1)+(1)(1)+(1)(0) = 0,\\v_1\cdot v_3 &= (-1)(1)+(1)(-1)+(1)(2) = 0,\\v_2\cdot v_3 &= (1)(1)+(1)(-1)+(0)(2) = 0.\end{aligned}\]

(iii) We normalize:

\[\|v_1\| = \sqrt{3},\quad \|v_2\| = \sqrt{2},\quad \|v_3\| = \sqrt{6},\]

so

\[u_1 = \frac{1}{\sqrt{3}}\begin{pmatrix} -1\\1\\1\end{pmatrix},\quad u_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1\\1\\0\end{pmatrix},\quad u_3 = \frac{1}{\sqrt{6}}\begin{pmatrix} 1\\-1\\2\end{pmatrix},\]

and we set

\[Q = \begin{pmatrix} -\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}}\\[4pt] \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}}\\[4pt] \frac{1}{\sqrt{3}} & 0 & \frac{2}{\sqrt{6}}\end{pmatrix}.\]

Since the columns are orthonormal eigenvectors for eigenvalues \(0, 1, 3\), we have \(Q^{-1}AQ = \begin{pmatrix} 0 & 0 & 0\\0 & 1 & 0\\0 & 0 & 3\end{pmatrix}\).

(iv) The \((i,j)\) entry of \(Q^tQ\) is \(u_i\cdot u_j\). Since the columns are orthonormal, \(u_i\cdot u_j = \delta_{ij}\), so \(Q^tQ = I_3\).
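Both (iii) and (iv) can be confirmed numerically (optional); the sketch below checks \(Q^tQ = I_3\) and \(Q^tAQ = \mathrm{diag}(0,1,3)\) to floating-point accuracy:

```python
import math

A = [[1, 0, 1], [0, 1, -1], [1, -1, 2]]
r3, r2, r6 = math.sqrt(3), math.sqrt(2), math.sqrt(6)
# Columns of Q are the normalized eigenvectors u1, u2, u3.
Q = [[-1 / r3, 1 / r2, 1 / r6],
     [1 / r3, 1 / r2, -1 / r6],
     [1 / r3, 0.0, 2 / r6]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Qt = [[Q[j][i] for j in range(3)] for i in range(3)]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
D = [[0, 0, 0], [0, 1, 0], [0, 0, 3]]

qtq_ok = all(abs(matmul(Qt, Q)[i][j] - I3[i][j]) < 1e-12
             for i in range(3) for j in range(3))
diag_ok = all(abs(matmul(matmul(Qt, A), Q)[i][j] - D[i][j]) < 1e-12
              for i in range(3) for j in range(3))
```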

2. Follow the steps below for the matrix \(B = \begin{pmatrix} 3 & -1\\1 & 1\end{pmatrix}\).

  1. (i) Find the single eigenvalue \(\lambda\) for \(B\) and the solution space to the homogeneous system of equations with coefficient matrix \(-\lambda I_2+B\). These are all of the eigenvectors associated to \(\lambda\).
  2. (ii) Take any \(v_2\in \mathbb{R}^2\) that is not an eigenvector of \(B\) associated to \(\lambda\) and set \(v_1 = (-\lambda I_2+B)\cdot v_2\).
  3. (iii) Verify that \(v_1\) is an eigenvector of \(B\).
  4. (iv) Let \(P\) denote the \(2\times 2\) matrix with columns \(v_1, v_2\). Calculate \(P^{-1}BP\). If done correctly you should get \(\begin{pmatrix} \lambda & 1\\0 & \lambda\end{pmatrix}\), the Jordan canonical form of \(B\).

Solution.

(i) The characteristic polynomial is

\[p_B(x) = (3-x)(1-x)+1 = x^2-4x+4 = (x-2)^2,\]

so \(B\) has the single repeated eigenvalue \(\lambda = 2\). Row-reducing \(B-2I_2\):

\[\begin{pmatrix} 1 & -1\\1 & -1\end{pmatrix} \xrightarrow{R_2 \leftarrow R_2 - R_1} \begin{pmatrix} 1 & -1\\0 & 0\end{pmatrix}.\]

The eigenspace is spanned by \(\begin{pmatrix} 1\\1\end{pmatrix}\); every eigenvector for \(\lambda = 2\) is a scalar multiple of this vector.

(ii) Choose \(v_2 = \begin{pmatrix} 1\\0\end{pmatrix}\), which is not a multiple of \(\begin{pmatrix} 1\\1\end{pmatrix}\) and hence not an eigenvector. Set

\[\begin{aligned}v_1 &= (B-2I_2)\,v_2 = \begin{pmatrix} 1 & -1\\1 & -1\end{pmatrix}\begin{pmatrix} 1\\0\end{pmatrix}\\&= \begin{pmatrix} 1\\1\end{pmatrix}.\end{aligned}\]

(iii) Verifying \(v_1\) is an eigenvector:

\[Bv_1 = \begin{pmatrix} 3 & -1\\1 & 1\end{pmatrix}\begin{pmatrix} 1\\1\end{pmatrix} = \begin{pmatrix} 2\\2\end{pmatrix} = 2\begin{pmatrix} 1\\1\end{pmatrix} = 2v_1.\]

(iv) Let \(P = \begin{pmatrix} 1 & 1\\1 & 0\end{pmatrix}\), so \(\det P = -1\) and \(P^{-1} = \begin{pmatrix} 0 & 1\\1 & -1\end{pmatrix}\). Then

\[\begin{aligned}P^{-1}BP &= \begin{pmatrix} 0 & 1\\1 & -1\end{pmatrix}\begin{pmatrix} 3 & -1\\1 & 1\end{pmatrix}\begin{pmatrix} 1 & 1\\1 & 0\end{pmatrix}\\&= \begin{pmatrix} 0 & 1\\1 & -1\end{pmatrix}\begin{pmatrix} 2 & 3\\2 & 1\end{pmatrix} = \begin{pmatrix} 2 & 1\\0 & 2\end{pmatrix}.\end{aligned}\]
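Since all the entries involved are integers, step (iv) can be checked exactly in a few lines (optional):

```python
B = [[3, -1], [1, 1]]
P = [[1, 1], [1, 0]]
Pinv = [[0, 1], [1, -1]]  # inverse of P, since det P = -1

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J = matmul(matmul(Pinv, B), P)  # the Jordan form [[2, 1], [0, 2]]
```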
Thursday, April 2

1. Given \(A = \begin{pmatrix} 3 & 1\\-1 & 5\end{pmatrix}\): Explain why \(A\) is not diagonalizable, and then find \(P\in \mathrm{M}_2(\mathbb{R})\) such that \(P^{-1}AP = J\), where \(J\) is in Jordan canonical form. Be sure to verify that \(P^{-1}AP = J\). Follow the steps outlined in class and in the Daily Update.

Solution. The characteristic polynomial is

\[p_A(x) = (3-x)(5-x)+1 = x^2-8x+16 = (x-4)^2,\]

so \(\lambda = 4\) is the only eigenvalue. Row-reducing \(A-4I_2\):

\[\begin{pmatrix} -1 & 1\\-1 & 1\end{pmatrix} \xrightarrow{R_2 \leftarrow R_2 - R_1} \begin{pmatrix} -1 & 1\\0 & 0\end{pmatrix} \xrightarrow{R_1 \leftarrow -R_1} \begin{pmatrix} 1 & -1\\0 & 0\end{pmatrix}.\]

The eigenspace is one-dimensional, spanned by \(\begin{pmatrix} 1\\1\end{pmatrix}\). Since the eigenspace has dimension \(1 < 2\), \(A\) is not diagonalizable.

To find the Jordan form, choose \(v_2 = \begin{pmatrix} 1\\0\end{pmatrix}\in \mathbb{R}^2\), which is not a multiple of \(\begin{pmatrix} 1\\1\end{pmatrix}\) and hence not an eigenvector. Set

\[v_1 = (A-4I_2)\,v_2 = \begin{pmatrix} -1 & 1\\-1 & 1\end{pmatrix}\begin{pmatrix} 1\\0\end{pmatrix} = \begin{pmatrix} -1\\-1\end{pmatrix}.\]

We verify \(v_1\) is an eigenvector: \(Av_1 = \begin{pmatrix} 3 & 1\\-1 & 5\end{pmatrix}\begin{pmatrix} -1\\-1\end{pmatrix} = \begin{pmatrix} -4\\-4\end{pmatrix} = 4v_1\).

Let \(P = \begin{pmatrix} -1 & 1\\-1 & 0\end{pmatrix}\), so \(\det P = 0-(-1) = 1\) and \(P^{-1} = \begin{pmatrix} 0 & -1\\1 & -1\end{pmatrix}\). We compute:

\[\begin{aligned} AP &= \begin{pmatrix} 3 & 1\\-1 & 5\end{pmatrix}\begin{pmatrix} -1 & 1\\-1 & 0\end{pmatrix} = \begin{pmatrix} -4 & 3\\-4 & -1\end{pmatrix},\\ P^{-1}AP &= \begin{pmatrix} 0 & -1\\1 & -1\end{pmatrix}\begin{pmatrix} -4 & 3\\-4 & -1\end{pmatrix} = \begin{pmatrix} 4 & 1\\0 & 4\end{pmatrix} = J. \end{aligned}\]

Thus \(P^{-1}AP = \begin{pmatrix} 4 & 1\\0 & 4\end{pmatrix}\) is the Jordan canonical form of \(A\).

2. As mentioned in class, we can do all of our previous matrix calculations over \(\mathbb{C}\), the complex numbers. For the matrix \(B = \begin{pmatrix} 0 & 1\\-1 & 0\end{pmatrix}\), find two distinct eigenvalues in \(\mathbb{C}\) and a diagonalizing matrix \(P\in \mathrm{M}_2(\mathbb{C})\) such that \(P^{-1}BP = D\), a diagonal matrix. The process for diagonalizing \(B\) over \(\mathbb{C}\) is exactly the same as the one we have used over \(\mathbb{R}\); only the number system has changed. In fact, almost all of our calculations so far have involved only integers and rational numbers, so in effect they have taken place in the number system \(\mathbb{Q}\). This has been purely to keep calculations by hand manageable.

Solution. The characteristic polynomial is

\[p_B(x) = (0-x)(0-x)+1 = x^2+1 = (x-i)(x+i),\]

so the two distinct eigenvalues in \(\mathbb{C}\) are \(\lambda_1 = i\) and \(\lambda_2 = -i\).

For \(\lambda_1 = i\): Row-reducing \(B-iI_2 = \begin{pmatrix} -i & 1\\-1 & -i\end{pmatrix}\). Note that \(R_2 = i\cdot R_1\) (since \(-1 = i\cdot(-i)\)), so this row-reduces to \(\begin{pmatrix} 1 & i\\0 & 0\end{pmatrix}\), giving eigenvector \(v_1 = \begin{pmatrix} 1\\i\end{pmatrix}\). Check: \(Bv_1 = \begin{pmatrix} 0 & 1\\-1 & 0\end{pmatrix}\begin{pmatrix} 1\\i\end{pmatrix} = \begin{pmatrix} i\\-1\end{pmatrix} = i\begin{pmatrix} 1\\i\end{pmatrix} = iv_1\).

For \(\lambda_2 = -i\): A similar calculation gives \(v_2 = \begin{pmatrix} 1\\-i\end{pmatrix}\). To see this:

\[Bv_2 = \begin{pmatrix} 0 & 1\\-1 & 0\end{pmatrix}\begin{pmatrix} 1\\-i\end{pmatrix} = \begin{pmatrix} -i\\-1\end{pmatrix} = -i\begin{pmatrix} 1\\-i\end{pmatrix} = -iv_2.\]

Let \(P = \begin{pmatrix} 1 & 1\\i & -i\end{pmatrix}\in \mathrm{M}_2(\mathbb{C})\). Then \(\det P = -i-i = -2i\), so

\[P^{-1} = \frac{1}{-2i}\begin{pmatrix} -i & -1\\-i & 1\end{pmatrix} = \begin{pmatrix} \frac{1}{2} & \frac{1}{2i}\\ \frac{1}{2} & -\frac{1}{2i}\end{pmatrix} = \begin{pmatrix} \frac{1}{2} & -\frac{i}{2}\\ \frac{1}{2} & \frac{i}{2}\end{pmatrix}.\]

We verify:

\[\begin{aligned} BP &= \begin{pmatrix} 0 & 1\\-1 & 0\end{pmatrix}\begin{pmatrix} 1 & 1\\i & -i\end{pmatrix} = \begin{pmatrix} i & -i\\-1 & -1\end{pmatrix},\\ P^{-1}BP &= \begin{pmatrix} \frac{1}{2} & -\frac{i}{2}\\ \frac{1}{2} & \frac{i}{2}\end{pmatrix}\begin{pmatrix} i & -i\\-1 & -1\end{pmatrix} = \begin{pmatrix} \frac{i}{2}+\frac{i}{2} & -\frac{i}{2}+\frac{i}{2}\\ \frac{i}{2}-\frac{i}{2} & -\frac{i}{2}-\frac{i}{2}\end{pmatrix} = \begin{pmatrix} i & 0\\0 & -i\end{pmatrix} = D. \end{aligned}\]
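Python's built-in complex numbers make this easy to verify (optional); note that `1j` denotes \(i\):

```python
B = [[0, 1], [-1, 0]]
P = [[1, 1], [1j, -1j]]
Pinv = [[0.5, -0.5j], [0.5, 0.5j]]  # inverse computed above

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = matmul(matmul(Pinv, B), P)  # the diagonal matrix diag(i, -i)
```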

Bonus Problem 9. Let \(A\) be a real \(2\times 2\) matrix and assume that \(p_A(x)\) has a repeated real root \(\lambda\). Prove that \((A-\lambda\cdot I_2)^2 = 0_{2\times 2}\). Conclude that if \(v_2\) is not an eigenvector for \(A\), then \(v_1 = (A-\lambda\cdot I_2)v_2\) is an eigenvector for \(A\). With this, one can show that the same calculation done in class gives \(P^{-1}AP = \begin{pmatrix} \lambda & 1\\0 & \lambda\end{pmatrix}\), for \(P = [v_1\ v_2]\). Due Tuesday, April 7. (5 points)

Solution. Write \(A = \begin{pmatrix} a & b\\c & d\end{pmatrix}\). The characteristic polynomial is

\[p_A(x) = \det(A-xI_2) = (a-x)(d-x)-bc = x^2-(a+d)x+(ad-bc).\]

Since \(\lambda\) is a repeated root, \(p_A(x) = (x-\lambda)^2 = x^2-2\lambda x+\lambda^2\). Comparing coefficients gives

\[\begin{aligned} a+d &= 2\lambda \quad\quad\quad\quad \text{(T)}\\ ad-bc &= \lambda^2. \quad\quad\quad\quad \text{(D)} \end{aligned}\]

Now we compute \((A-\lambda I_2)^2\) directly. We have \(A-\lambda I_2 = \begin{pmatrix} a-\lambda & b\\c & d-\lambda\end{pmatrix}\), so

\[(A-\lambda I_2)^2 = \begin{pmatrix} (a-\lambda)^2+bc & b(a-\lambda)+b(d-\lambda)\\c(a-\lambda)+c(d-\lambda) & bc+(d-\lambda)^2\end{pmatrix}.\]

We check each entry using (T) and (D):

The (1,2) entry: \(b(a-\lambda)+b(d-\lambda) = b\bigl[(a-\lambda)+(d-\lambda)\bigr] = b(a+d-2\lambda) = b\cdot 0 = 0\), by (T).

The (2,1) entry: \(c(a-\lambda)+c(d-\lambda) = c(a+d-2\lambda) = c\cdot 0 = 0\), by (T).

The (1,1) entry: \((a-\lambda)^2+bc\). From (T), \(d = 2\lambda-a\), so from (D), \(bc = ad-\lambda^2 = a(2\lambda-a)-\lambda^2 = -(a^2-2a\lambda+\lambda^2) = -(a-\lambda)^2\). Therefore \((a-\lambda)^2+bc = (a-\lambda)^2-(a-\lambda)^2 = 0\).

The (2,2) entry: \(bc+(d-\lambda)^2\). By the same reasoning (swapping the roles of \(a\) and \(d\)), \(bc = -(d-\lambda)^2\), so \(bc+(d-\lambda)^2 = 0\).

Therefore \((A-\lambda I_2)^2 = 0_{2\times 2}\). \(\square\)

Now suppose \(v_2\in \mathbb{R}^2\) is not an eigenvector of \(A\) for \(\lambda\), and set \(v_1 = (A-\lambda I_2)v_2\). Since \(v_2\) is not an eigenvector, \(v_1\neq \mathbf{0}\). Then:

\[(A-\lambda I_2)v_1 = (A-\lambda I_2)(A-\lambda I_2)v_2 = (A-\lambda I_2)^2 v_2 = 0_{2\times 2}\cdot v_2 = \mathbf{0}.\]

This means \(Av_1 = \lambda v_1\), so \(v_1\) is an eigenvector of \(A\) for \(\lambda\). \(\square\)
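A concrete instance (optional): for the matrix \(A = \begin{pmatrix} 3 & 1\\-1 & 5\end{pmatrix}\) from the April 2 assignment, with repeated eigenvalue \(\lambda = 4\), one can confirm \((A-4I_2)^2 = 0_{2\times 2}\) directly:

```python
A = [[3, 1], [-1, 5]]
lam = 4
N = [[A[0][0] - lam, A[0][1]],
     [A[1][0], A[1][1] - lam]]  # N = A - 4 I_2

N2 = [[sum(N[i][k] * N[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]  # N squared: the zero matrix
```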

Tuesday, April 7

1. Let \(z_1 = a+bi\) and \(z_2 = c+di\). Prove the formulas \(\overline{z_1+z_2} = \overline{z_1}+\overline{z_2}\) and \(\overline{z_1z_2} = \overline{z_1}\cdot\overline{z_2}\).

Solution to 1. Let \(z_1 = a+bi\) and \(z_2 = c+di\), so \(\overline{z_1} = a-bi\) and \(\overline{z_2} = c-di\).

For the sum: \(z_1+z_2 = (a+c)+(b+d)i\), so

\[\overline{z_1+z_2} = (a+c)-(b+d)i = (a-bi)+(c-di) = \overline{z_1}+\overline{z_2}.\]

For the product: \(z_1z_2 = (ac-bd)+(ad+bc)i\), so \(\overline{z_1z_2} = (ac-bd)-(ad+bc)i\). On the other hand,

\[\overline{z_1}\cdot\overline{z_2} = (a-bi)(c-di) = (ac-bd)+(-ad-bc)i = (ac-bd)-(ad+bc)i.\]

These are equal, so \(\overline{z_1z_2} = \overline{z_1}\cdot\overline{z_2}\). \(\square\)
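Both identities can be spot-checked with Python's complex numbers over many random Gaussian integers (optional; the comparisons are exact since all components are small integers):

```python
import random

random.seed(0)
ok = True
for _ in range(200):
    z1 = complex(random.randint(-9, 9), random.randint(-9, 9))
    z2 = complex(random.randint(-9, 9), random.randint(-9, 9))
    ok = ok and (z1 + z2).conjugate() == z1.conjugate() + z2.conjugate()
    ok = ok and (z1 * z2).conjugate() == z1.conjugate() * z2.conjugate()
```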

2. Find two square roots of \(i\).

Solution to 2. We seek \(z = a+bi\) with \(z^2 = i\). Expanding: \((a^2-b^2)+2abi = 0+i\). Comparing real and imaginary parts gives \(a^2-b^2 = 0\) and \(2ab = 1\). From the first equation \(a = \pm b\). Since \(2ab = 1 > 0\) we need \(a\) and \(b\) to have the same sign, so \(a = b\). Substituting into \(2ab = 1\) gives \(2a^2 = 1\), hence \(a = \pm\tfrac{1}{\sqrt{2}}\). The two square roots of \(i\) are

\[\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}i \qquad \text{and} \qquad -\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}i.\]

One can verify: \(\left(\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}i\right)^2 = \frac{1}{2}-\frac{1}{2}+\frac{2}{2}i = i\). \(\checkmark\)

3. Solve the following system of linear equations over \(\mathbb{C}\):

\[\begin{align*} 2ix+3y-4z &= 4\\ 6x-4iy-z &= 8i. \end{align*}\]

Solution to 3. We row-reduce the augmented matrix over \(\mathbb{C}\):

\[\left[\begin{array}{ccc|c} 2i & 3 & -4 & 4\\ 6 & -4i & -1 & 8i\end{array}\right] \xrightarrow{\frac{1}{2i}R_1} \left[\begin{array}{ccc|c} 1 & -\tfrac{3i}{2} & 2i & -2i\\ 6 & -4i & -1 & 8i\end{array}\right] \xrightarrow{R_2\leftarrow R_2-6R_1} \left[\begin{array}{ccc|c} 1 & -\tfrac{3i}{2} & 2i & -2i\\ 0 & 5i & -1-12i & 20i\end{array}\right].\]

Dividing \(R_2\) by \(5i\) (i.e., multiplying by \(\frac{-i}{5}\)):

\[\xrightarrow{\frac{1}{5i}R_2} \left[\begin{array}{ccc|c} 1 & -\tfrac{3i}{2} & 2i & -2i\\ 0 & 1 & \tfrac{-12+i}{5} & 4\end{array}\right] \xrightarrow{R_1\leftarrow R_1+\frac{3i}{2}R_2} \left[\begin{array}{ccc|c} 1 & 0 & \tfrac{-3-16i}{10} & 4i\\ 0 & 1 & \tfrac{-12+i}{5} & 4\end{array}\right].\]

Setting \(z = t\in\mathbb{C}\) as a free parameter, the solution set is

\[\left\{\begin{pmatrix} x\\y\\z\end{pmatrix} = \begin{pmatrix} 4i\\4\\0\end{pmatrix} + t\begin{pmatrix} \dfrac{3+16i}{10}\\[6pt]\dfrac{12-i}{5}\\[4pt]1\end{pmatrix}\ \bigg|\ t\in\mathbb{C}\right\}.\]
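As a sanity check, the particular solution \((x,y,z) = (4i,4,0)\) (the \(t = 0\) point) can be substituted back into the original equations; a short Python sketch using exact Gaussian-integer arithmetic:

```python
# Substitute the t = 0 particular solution into the original system
# 2ix + 3y - 4z = 4 and 6x - 4iy - z = 8i.
x, y, z = 4j, 4, 0
assert 2j * x + 3 * y - 4 * z == 4
assert 6 * x - 4j * y - z == 8j
```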

Thursday, April 9

For these problems we are working over \(\mathbb{C}\).

1. Find the Jordan canonical form \(J\) for \(A = \begin{pmatrix} 0 & 25\\1 & 10i\end{pmatrix}\) and the matrix \(P\) satisfying \(P^{-1}AP = J\). Be sure to check this relation by direct calculation.

Solution to 1. The characteristic polynomial is

\[p_A(x) = \det(A-xI_2) = (0-x)(10i-x)-25 = x^2-10ix-25 = (x-5i)^2,\]

so \(A\) has the single repeated eigenvalue \(\lambda = 5i\). Row-reducing \(A-5iI_2\):

\[\begin{pmatrix} -5i & 25\\1 & 5i\end{pmatrix} \xrightarrow{R_1\leftarrow\frac{-1}{5i}R_1} \begin{pmatrix} 1 & 5i\\1 & 5i\end{pmatrix} \xrightarrow{R_2\leftarrow R_2-R_1} \begin{pmatrix} 1 & 5i\\0 & 0\end{pmatrix}.\]

The eigenspace is spanned by \(v_1 = \begin{pmatrix} -5i\\1\end{pmatrix}\). Since it is one-dimensional, \(A\) is not diagonalizable over \(\mathbb{C}\).

Choose \(v_2 = \begin{pmatrix} 1\\0\end{pmatrix}\), which is not a multiple of \(v_1\) and hence not an eigenvector; applying \(A-5iI_2\) to it recovers the eigenvector found above:

\[v_1 = (A-5iI_2)v_2 = \begin{pmatrix} -5i & 25\\1 & 5i\end{pmatrix}\begin{pmatrix} 1\\0\end{pmatrix} = \begin{pmatrix} -5i\\1\end{pmatrix}. \checkmark\]

Let \(P = \begin{pmatrix} -5i & 1\\1 & 0\end{pmatrix}\). Then \(\det P = 0-1 = -1\), so \(P^{-1} = \frac{1}{-1}\begin{pmatrix} 0 & -1\\-1 & -5i\end{pmatrix} = \begin{pmatrix} 0 & 1\\1 & 5i\end{pmatrix}\). We verify:

\[\begin{aligned} AP &= \begin{pmatrix} 0 & 25\\1 & 10i\end{pmatrix}\begin{pmatrix} -5i & 1\\1 & 0\end{pmatrix} = \begin{pmatrix} 25 & 0\\-5i+10i & 1\end{pmatrix} = \begin{pmatrix} 25 & 0\\5i & 1\end{pmatrix},\\[4pt] P^{-1}AP &= \begin{pmatrix} 0 & 1\\1 & 5i\end{pmatrix}\begin{pmatrix} 25 & 0\\5i & 1\end{pmatrix} = \begin{pmatrix} 5i & 1\\0 & 5i\end{pmatrix} = J.\end{aligned}\]

Thus \(J = \begin{pmatrix} 5i & 1\\0 & 5i\end{pmatrix}\) is the Jordan canonical form of \(A\).
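Since \(P\) is invertible, the relation \(P^{-1}AP = J\) is equivalent to \(AP = PJ\), which can be checked without computing an inverse. A Python sketch with exact complex arithmetic; the helper `matmul2` is written here for the check, not taken from any library:

```python
# Verify AP = PJ (equivalent to P^{-1} A P = J, since P is invertible),
# with 2x2 matrices stored as nested lists.
def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 25], [1, 10j]]
P = [[-5j, 1], [1, 0]]
J = [[5j, 1], [0, 5j]]
assert matmul2(A, P) == matmul2(P, J)
```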

2. Given \(B = \begin{pmatrix} 1 & 2i\\2i & 2\end{pmatrix}\), note that \(B\) is symmetric. Show that \(B\) is diagonalizable, but not orthogonally diagonalizable over \(\mathbb{C}\).

Solution to 2. The characteristic polynomial is

\[p_B(x) = (1-x)(2-x)-(2i)^2 = x^2-3x+2+4 = x^2-3x+6.\]

The discriminant is \(9-24 = -15 < 0\), so the two eigenvalues are the distinct complex numbers

\[\lambda_1 = \frac{3+i\sqrt{15}}{2}, \qquad \lambda_2 = \frac{3-i\sqrt{15}}{2}.\]

Since \(B\) has two distinct eigenvalues, it is diagonalizable over \(\mathbb{C}\).

To find eigenvectors, we use row 2 of \(B-\lambda_k I_2\), i.e., \(2i\cdot v_1+(2-\lambda_k)v_2 = 0\).

For \(\lambda_1\): \(2iv_1+\frac{1-i\sqrt{15}}{2}\,v_2 = 0\), giving eigenvector \(w_1 = \begin{pmatrix} \sqrt{15}+i\\4\end{pmatrix}\).

For \(\lambda_2\): similarly, \(w_2 = \begin{pmatrix} -\sqrt{15}+i\\4\end{pmatrix}\).

To show \(B\) is not orthogonally diagonalizable, we compute the Hermitian inner product of \(w_1\) and \(w_2\):

\[\langle w_1, w_2\rangle = \overline{(\sqrt{15}+i)}(-\sqrt{15}+i)+\overline{4}\cdot 4 = (\sqrt{15}-i)(-\sqrt{15}+i)+16.\]

Expanding: \((\sqrt{15}-i)(-\sqrt{15}+i) = -15+i\sqrt{15}+i\sqrt{15}-i^2 = -15+2i\sqrt{15}+1 = -14+2i\sqrt{15}\). Therefore

\[\langle w_1, w_2\rangle = -14+2i\sqrt{15}+16 = 2+2i\sqrt{15} \neq 0.\]

Since the eigenvectors for distinct eigenvalues are not orthogonal with respect to the standard Hermitian inner product, \(B\) cannot be orthogonally diagonalized over \(\mathbb{C}\). \(\square\)
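One can check both the eigenvector relations and the non-orthogonality numerically; a Python sketch (the helper `apply2` is written here for the check, and the tolerances absorb floating-point rounding):

```python
import math

# Check B w_k = l_k w_k and that the Hermitian inner product
# <w1, w2> equals 2 + 2i*sqrt(15), which is nonzero.
s = math.sqrt(15)
l1, l2 = (3 + 1j * s) / 2, (3 - 1j * s) / 2
w1, w2 = [s + 1j, 4], [-s + 1j, 4]
B = [[1, 2j], [2j, 2]]

def apply2(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

assert all(abs(apply2(B, w1)[i] - l1 * w1[i]) < 1e-12 for i in range(2))
assert all(abs(apply2(B, w2)[i] - l2 * w2[i]) < 1e-12 for i in range(2))

inner = w1[0].conjugate() * w2[0] + w1[1].conjugate() * w2[1]
assert abs(inner - (2 + 2j * s)) < 1e-12
```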

3. Given \(C = \begin{pmatrix} 1 & -2i\\2i & 2\end{pmatrix}\), note that \(C = \overline{C}^t\). Show that \(C\) is orthogonally diagonalizable over \(\mathbb{C}\), i.e., there exists \(Q\in \mathrm{M}_2(\mathbb{C})\) with \(Q^{-1}CQ\) a diagonal matrix, and the columns of \(Q\) are orthogonal and have length one.

Solution to 3. We first verify that \(C = \overline{C}^t\): \(\overline{C}^t = \overline{\begin{pmatrix} 1 & -2i\\2i & 2\end{pmatrix}}^t = \begin{pmatrix} 1 & 2i\\-2i & 2\end{pmatrix}^t = \begin{pmatrix} 1 & -2i\\2i & 2\end{pmatrix} = C\). \(\checkmark\)

The characteristic polynomial is

\[p_C(x) = (1-x)(2-x)-(-2i)(2i) = x^2-3x+2-4 = x^2-3x-2.\]

The discriminant is \(9+8 = 17\), giving two distinct real eigenvalues

\[\lambda_1 = \frac{3+\sqrt{17}}{2}, \qquad \lambda_2 = \frac{3-\sqrt{17}}{2}.\]

For each \(\lambda\), row 2 of \(C-\lambda I_2\) gives \(2iv_1+(2-\lambda)v_2 = 0\), so \(v_1 = \frac{(\lambda-2)}{2i}v_2\).

For \(\lambda_1\): \(\frac{\lambda_1-2}{2i} = \frac{(\sqrt{17}-1)/2}{2i} = \frac{(\sqrt{17}-1)i}{-4}\), giving eigenvector \(u_1 = \begin{pmatrix} -(\sqrt{17}-1)i\\4\end{pmatrix}\).

For \(\lambda_2\): \(\frac{\lambda_2-2}{2i} = \frac{(-1-\sqrt{17})/2}{2i} = \frac{(1+\sqrt{17})i}{4}\), giving eigenvector \(u_2 = \begin{pmatrix} (1+\sqrt{17})i\\4\end{pmatrix}\).

Orthogonality. We compute:

\[\langle u_1,u_2\rangle = \overline{-(\sqrt{17}-1)i}\cdot(1+\sqrt{17})i+\bar{4}\cdot 4 = (\sqrt{17}-1)i\cdot(1+\sqrt{17})i+16.\]

Now \((\sqrt{17}-1)(1+\sqrt{17}) = 17-1 = 16\), so \((\sqrt{17}-1)i\cdot(1+\sqrt{17})i = 16i^2 = -16\). Therefore \(\langle u_1,u_2\rangle = -16+16 = 0\). \(\checkmark\)

Norms.

\[\|u_1\|^2 = (\sqrt{17}-1)^2+16 = 34-2\sqrt{17}, \qquad \|u_2\|^2 = (\sqrt{17}+1)^2+16 = 34+2\sqrt{17}.\]

Setting \(q_k = u_k/\|u_k\|\), the matrix

\[Q = \begin{pmatrix} \dfrac{-(\sqrt{17}-1)i}{\sqrt{34-2\sqrt{17}}} & \dfrac{(1+\sqrt{17})i}{\sqrt{34+2\sqrt{17}}}\\[10pt] \dfrac{4}{\sqrt{34-2\sqrt{17}}} & \dfrac{4}{\sqrt{34+2\sqrt{17}}}\end{pmatrix}\]

has orthonormal columns and satisfies \(Q^{-1}CQ = \overline{Q}^t CQ = \begin{pmatrix} \lambda_1 & 0\\0 & \lambda_2\end{pmatrix}\). \(\square\)
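The claimed properties of \(Q\) can be spot-checked numerically; a Python sketch (the helpers `apply2` and `herm` are written here for the check, with tolerances for floating-point rounding):

```python
import math

# Check that the columns q1, q2 of Q are orthonormal eigenvectors of C:
# C q_k = l_k q_k, <q1, q2> = 0, and ||q1|| = ||q2|| = 1.
r = math.sqrt(17)
l1, l2 = (3 + r) / 2, (3 - r) / 2
n1, n2 = math.sqrt(34 - 2 * r), math.sqrt(34 + 2 * r)
q1 = [-(r - 1) * 1j / n1, 4 / n1]
q2 = [(1 + r) * 1j / n2, 4 / n2]
C = [[1, -2j], [2j, 2]]

def apply2(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def herm(v, w):
    return v[0].conjugate() * w[0] + v[1].conjugate() * w[1]

for q, l in ((q1, l1), (q2, l2)):
    assert all(abs(apply2(C, q)[i] - l * q[i]) < 1e-12 for i in range(2))
assert abs(herm(q1, q2)) < 1e-12
assert abs(herm(q1, q1) - 1) < 1e-12 and abs(herm(q2, q2) - 1) < 1e-12
```

Note that the eigenvalues here are real, as must happen for a Hermitian matrix.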

Tuesday, April 21

1. Let \(A = \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix}\). Calculate \(|A|\) first by expanding along the second row, and then by expanding along the third column, checking that your two answers agree.

Solution to 1. Let \(A = \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix}\).

Expansion along the second row. Using cofactors with the sign pattern \(-,+,-\) for row 2:

\[\begin{align*} |A| &= -d\,\begin{vmatrix}b&c\\h&i\end{vmatrix} +e\,\begin{vmatrix}a&c\\g&i\end{vmatrix} -f\,\begin{vmatrix}a&b\\g&h\end{vmatrix}\\ &= -d(bi-ch)+e(ai-cg)-f(ah-bg)\\ &= -dbi+dch+eai-ecg-fah+fbg. \end{align*}\]

Expansion along the third column. Using cofactors with the sign pattern \(+,-,+\) for column 3:

\[\begin{align*} |A| &= c\,\begin{vmatrix}d&e\\g&h\end{vmatrix} -f\,\begin{vmatrix}a&b\\g&h\end{vmatrix} +i\,\begin{vmatrix}a&b\\d&e\end{vmatrix}\\ &= c(dh-eg)-f(ah-bg)+i(ae-bd)\\ &= cdh-ceg-fah+fbg+iae-ibd. \end{align*}\]

Both expressions equal \(aei-afh-bdi+bfg+cdh-ceg\), confirming they agree. \(\square\)
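The agreement can also be spot-checked on a numeric example; a Python sketch with arbitrarily chosen integer entries:

```python
# Evaluate both cofactor expansions on one numeric example and compare
# with the six-term formula; entries chosen arbitrarily.
a, b, c, d, e, f, g, h, i = 2, -1, 3, 0, 4, 5, 1, -2, 6

row2 = -d * (b * i - c * h) + e * (a * i - c * g) - f * (a * h - b * g)
col3 = c * (d * h - e * g) - f * (a * h - b * g) + i * (a * e - b * d)
direct = a*e*i - a*f*h - b*d*i + b*f*g + c*d*h - c*e*g

assert row2 == col3 == direct == 51
```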

2. Use elementary row operations to calculate \(|B|\), for \(B = \begin{pmatrix} 1 & 1 & 1 & 1\\1 & 2 & 3 & 4\\1 & 3 & 6 & 10\\1 & 4 & 10 & 20\end{pmatrix}\).

Solution to 2. We subtract \(R_1\) from each subsequent row (which does not change the determinant) to introduce zeros in column 1:

\[B \xrightarrow{R_k \leftarrow R_k - R_1,\ k=2,3,4} \begin{pmatrix} 1 & 1 & 1 & 1\\0 & 1 & 2 & 3\\0 & 2 & 5 & 9\\0 & 3 & 9 & 19\end{pmatrix}.\]

Now subtract multiples of \(R_2\) from \(R_3\) and \(R_4\):

\[\xrightarrow{\substack{R_3\leftarrow R_3-2R_2\\R_4\leftarrow R_4-3R_2}} \begin{pmatrix} 1 & 1 & 1 & 1\\0 & 1 & 2 & 3\\0 & 0 & 1 & 3\\0 & 0 & 3 & 10\end{pmatrix}.\]

Finally, subtract \(3R_3\) from \(R_4\):

\[\xrightarrow{R_4\leftarrow R_4-3R_3} \begin{pmatrix} 1 & 1 & 1 & 1\\0 & 1 & 2 & 3\\0 & 0 & 1 & 3\\0 & 0 & 0 & 1\end{pmatrix}.\]

This upper triangular matrix has all diagonal entries equal to \(1\). Since no row swaps were performed, \(|B| = 1\cdot 1\cdot 1\cdot 1 = 1\). \(\square\)
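An independent check of \(|B| = 1\) can be done by cofactor expansion instead of row operations; the recursive `det` helper below is written for this check, and the all-integer arithmetic makes the result exact:

```python
# Check |B| = 1 via recursive cofactor expansion along the first row.
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

B = [[1, 1, 1, 1], [1, 2, 3, 4], [1, 3, 6, 10], [1, 4, 10, 20]]
assert det(B) == 1
```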

Bonus Problem 10. Prove that the determinant of an upper triangular \(n\times n\) matrix is the product of its diagonal entries. Due Thursday, April 23. (3 points)

Solution to Bonus Problem 10. Let \(U = (u_{ij})\) be an upper triangular \(n\times n\) matrix, so \(u_{ij}=0\) whenever \(i>j\). We prove by induction on \(n\) that \(\det U = u_{11}u_{22}\cdots u_{nn}\).

Base case (\(n=1\)): \(\det(u_{11}) = u_{11}\). \(\checkmark\)

Inductive step: Assume the result holds for all upper triangular \((n-1)\times(n-1)\) matrices. Expand \(\det U\) along the first column. Since \(u_{i1}=0\) for \(i\geq 2\), only the \((1,1)\)-entry contributes:

\[\det U = u_{11}\cdot M_{11},\]

where \(M_{11}\) is the \((1,1)\)-minor, i.e., the determinant of the \((n-1)\times(n-1)\) matrix obtained by deleting row 1 and column 1. This submatrix is again upper triangular with diagonal entries \(u_{22},\ldots,u_{nn}\). By the inductive hypothesis, \(M_{11} = u_{22}\cdots u_{nn}\). Therefore \(\det U = u_{11}\cdot u_{22}\cdots u_{nn}\), which completes the induction. \(\square\)
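The bonus claim can be spot-checked on an example (a single instance, of course, not a substitute for the induction); the recursive `det` helper and the entries of `U` below are ours, chosen arbitrarily:

```python
# Spot-check on one upper triangular example: the cofactor-expansion
# determinant equals the product of the diagonal entries.
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

U = [[2, 7, -1, 3],
     [0, 5,  4, 0],
     [0, 0, -3, 8],
     [0, 0,  0, 6]]
assert det(U) == 2 * 5 * (-3) * 6 == -180
```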

Thursday, April 23

1. Let \(A = \begin{pmatrix} 3 & 1 & 1\\1 & 3 & 1\\1 & 1 & 3\end{pmatrix}\). Verify that \(p_A(x) = (x-2)^2(x-5)\) so that \(2\) is a repeated root, i.e., a repeated eigenvalue. Then show that the eigenspace for \(2\) contains two linearly independent vectors \(v_1, v_2\), while the eigenspace for \(5\) has one independent vector \(v_3\) and the vectors \(v_1, v_2, v_3\) are linearly independent. Upon setting \(P = [v_1\ v_2\ v_3]\) verify that \(P^{-1}AP = \begin{pmatrix} 2 & 0 & 0\\0 & 2 & 0\\0 & 0 & 5\end{pmatrix}\).

Solution to 1. We compute \(p_A(x) = \det(A-xI_3)\):

\[A - xI_3 = \begin{pmatrix} 3-x & 1 & 1\\1 & 3-x & 1\\1 & 1 & 3-x\end{pmatrix}.\]
\[\begin{align*} \det(A-xI_3) &= (3-x)\bigl[(3-x)^2-1\bigr] - 1\bigl[(3-x)-1\bigr] + 1\bigl[1-(3-x)\bigr]\\ &= (3-x)\bigl[(3-x)^2-1\bigr] - (2-x) - (2-x)\\ &= (3-x)(4-x)(2-x) - 2(2-x)\\ &= (2-x)\bigl[(3-x)(4-x)-2\bigr]\\ &= (2-x)(x^2-7x+10) = (2-x)(x-2)(x-5) = -(x-2)^2(x-5). \end{align*}\]

Hence \(p_A(x) = (x-2)^2(x-5)\) up to sign (the roots, and hence the eigenvalues, are unaffected), confirming eigenvalues \(\lambda = 2\) (multiplicity 2) and \(\lambda = 5\) (multiplicity 1).

Eigenspace for \(\lambda=2\). Row-reduce \(A-2I_3\):

\[\begin{pmatrix} 1 & 1 & 1\\1 & 1 & 1\\1 & 1 & 1\end{pmatrix} \xrightarrow{R_2\leftarrow R_2-R_1,\ R_3\leftarrow R_3-R_1} \begin{pmatrix} 1 & 1 & 1\\0 & 0 & 0\\0 & 0 & 0\end{pmatrix}.\]

The eigenspace is \(\{(-s-t,s,t)\mid s,t\in\mathbb{R}\}\), spanned by two linearly independent vectors:

\[v_1 = \begin{pmatrix} -1\\1\\0\end{pmatrix}, \qquad v_2 = \begin{pmatrix} -1\\0\\1\end{pmatrix}.\]

Eigenspace for \(\lambda=5\). Row-reducing \(A-5I_3\) gives the eigenspace \(\{(t,t,t)\mid t\in\mathbb{R}\}\), spanned by \(v_3 = \begin{pmatrix} 1\\1\\1\end{pmatrix}\).

Linear independence of \(v_1,v_2,v_3\). Set \(P = [v_1\ v_2\ v_3] = \begin{pmatrix} -1 & -1 & 1\\1 & 0 & 1\\0 & 1 & 1\end{pmatrix}\). Then \(\det P = 3 \neq 0\), so \(v_1,v_2,v_3\) are linearly independent.

Verification. Since \(Av_1 = 2v_1\), \(Av_2=2v_2\), and \(Av_3=5v_3\), we have \(AP = P\begin{pmatrix} 2&0&0\\0&2&0\\0&0&5\end{pmatrix}\), so \(P^{-1}AP = \begin{pmatrix} 2&0&0\\0&2&0\\0&0&5\end{pmatrix}\). \(\square\)
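The three eigenvector relations can be confirmed in exact integer arithmetic; a Python sketch (the helper `apply3` is written here for the check):

```python
# Check A v1 = 2 v1, A v2 = 2 v2, A v3 = 5 v3; together with
# det P != 0 these give P^{-1} A P = diag(2, 2, 5).
def apply3(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

A = [[3, 1, 1], [1, 3, 1], [1, 1, 3]]
v1, v2, v3 = [-1, 1, 0], [-1, 0, 1], [1, 1, 1]
assert apply3(A, v1) == [2 * x for x in v1]
assert apply3(A, v2) == [2 * x for x in v2]
assert apply3(A, v3) == [5 * x for x in v3]
```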

2. Let \(B = \begin{pmatrix} 2 & 1 & 0\\0 & 2 & 0\\0 & 0 & 5\end{pmatrix}\), so that \(p_B(x) = (x-2)^2(x-5)\). Show that the eigenspace associated to \(2\) has just one linearly independent vector. This accounts for the failure of \(B\) to be diagonalizable. Intuitively this should be the case, since the upper-left \(2\times 2\) corner of \(B\) is a JCF matrix.

Solution to 2. We row-reduce \(B - 2I_3\):

\[B - 2I_3 = \begin{pmatrix} 0 & 1 & 0\\0 & 0 & 0\\0 & 0 & 3\end{pmatrix} \xrightarrow{\frac{1}{3}R_3} \begin{pmatrix} 0 & 1 & 0\\0 & 0 & 0\\0 & 0 & 1\end{pmatrix} \xrightarrow{R_2 \leftrightarrow R_3} \begin{pmatrix} 0 & 1 & 0\\0 & 0 & 1\\0 & 0 & 0\end{pmatrix}.\]

The pivot columns are 2 and 3, leaving only \(x_1\) as a free variable. The eigenspace is therefore \(\left\{t\begin{pmatrix} 1\\0\\0\end{pmatrix}\ \bigg|\ t\in\mathbb{R}\right\}\), which is one-dimensional. Since the eigenvalue \(\lambda=2\) has algebraic multiplicity \(2\) but geometric multiplicity \(1\), the matrix \(B\) does not have a basis of eigenvectors and is therefore not diagonalizable. This is consistent with the fact that the upper-left \(2\times 2\) block \(\begin{pmatrix} 2&1\\0&2\end{pmatrix}\) is already in Jordan canonical form with a non-trivial Jordan block. \(\square\)
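A quick numerical illustration (not a replacement for the rank computation): \(B-2I_3\) kills \(e_1\) but neither \(e_2\) nor \(e_3\). The helper `apply3` below is written for this check.

```python
# B - 2I kills e1 but not e2 or e3, illustrating that the eigenspace
# for lambda = 2 is spanned by e1 alone.
def apply3(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

B2 = [[0, 1, 0], [0, 0, 0], [0, 0, 3]]   # B - 2I
assert apply3(B2, [1, 0, 0]) == [0, 0, 0]
assert apply3(B2, [0, 1, 0]) != [0, 0, 0]
assert apply3(B2, [0, 0, 1]) != [0, 0, 0]
```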

Tuesday, April 28

1. Show that the vectors \(v_1 = \begin{pmatrix} 1\\1\\0\end{pmatrix}\), \(v_2 = \begin{pmatrix} 1\\0\\1\end{pmatrix}\), \(v_3 = \begin{pmatrix} 0\\1\\1\end{pmatrix}\) form a basis for \(\mathbb{R}^3\) in two ways: (i) First show that the determinant of the matrix whose columns are \(v_1, v_2, v_3\) is non-zero and (ii) Second, show directly that \(v_1, v_2, v_3\) are linearly independent and span \(\mathbb{R}^3\). For spanning, show that the vector \(\begin{pmatrix} a\\b\\c\end{pmatrix}\) is a linear combination of \(v_1, v_2, v_3\).

2. For the matrices \(A_2 = \begin{pmatrix} 1 & 1\\1 & 1\end{pmatrix}\) and \(A_3 = \begin{pmatrix} 1 & 1 & 1\\1 & 1 & 1\\1 & 1 & 1\end{pmatrix}\), find the characteristic polynomials and verify that the matrices are diagonalizable. Find a diagonalizing matrix in each case. Can you guess the characteristic polynomial for \(A_4 = \begin{pmatrix} 1 & 1 & 1 & 1\\1 & 1 & 1 & 1\\1 & 1 & 1 & 1\\1 & 1 & 1 & 1\end{pmatrix}\)? What about \(A_n\), the \(n\times n\) matrix all of whose entries equal 1?

Bonus Problem 11. Let \(A\) be an \(n\times n\) matrix over \(\mathbb{R}\) (or \(\mathbb{C}\)), and suppose \(\lambda_1, \ldots, \lambda_r\) are distinct eigenvalues of \(A\). Let \(v_1, \ldots, v_r\) be any eigenvectors of \(A\) corresponding to \(\lambda_1, \ldots, \lambda_r\), respectively. Prove by induction on \(r\) that \(v_1, \ldots, v_r\) are linearly independent. Due Tuesday, May 5. (3 points)

Thursday, April 30

1. Verify the vector space axioms for \(P_3(\mathbb{R})\), the set of real polynomials of degree less than or equal to three.

2. Verify the vector space axioms for \(\mathrm{M}_2(\mathbb{R})\), the set of \(2\times 2\) real matrices. No doubt you'll observe that your calculations for this problem are almost identical to those in problem 1.